
Filebeat Configuration and Deployment Guide

  • 2020-11-01


In this article we will look at how to configure Filebeat to run as a DaemonSet in our Kubernetes cluster in order to ship logs to an Elasticsearch backend. We use Filebeat rather than Fluentd or Fluent Bit because it is an extremely lightweight utility with first-class Kubernetes support, which makes it a good fit for production.

Deployment Architecture

Filebeat will run as a DaemonSet in our Kubernetes cluster. It will:


  • Be deployed in a separate namespace named logging

  • Schedule pods on both master and worker nodes

  • Forward api-server logs from the master-node pods, for auditing and cluster administration

  • Forward workload-related logs from the worker-node pods, for application observability

Creating the Filebeat ServiceAccount and ClusterRole

Deploy the following manifest to create the permissions required by the Filebeat pods:


apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
  namespace: logging
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging   # must match the namespace where the ServiceAccount above is created
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io


From a security standpoint, we should keep the ClusterRole permissions as restricted as possible: if any pod associated with this service account is ever compromised, the attacker will not be able to gain access to the entire cluster or to the applications running in it.

Creating the Filebeat ConfigMap

Use the following manifest to create the ConfigMap that the Filebeat pods will consume:


apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
    #  inputs:
    #    path: ${path.config}/inputs.d/*.yml
    #    reload.enabled: true
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: true

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          include_annotations: ["artifact.spinnaker.io/name", "ad.datadoghq.com/tags"]
          include_labels: ["app.kubernetes.io/name"]
          labels.dedot: true
          annotations.dedot: true
          templates:
            - condition:
                equals:
                  kubernetes.namespace: myapp   # Set the namespace in which your app is running; add more conditions for more than one namespace.
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline:
                    pattern: '^[A-Za-z ]+[0-9]{2} (?:[01]\d|2[0123]):(?:[012345]\d):(?:[012345]\d)'   # Timestamp regex for the app logs. Change it as per your log format.
                    negate: true
                    match: after
            - condition:
                equals:
                  kubernetes.namespace: elasticsearch
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline:
                    pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}|^[0-9]{4}-[0-9]{2}-[0-9]{2}T'
                    negate: true
                    match: after

    processors:
      - add_cloud_metadata: ~
      - drop_fields:
          when:
            has_fields: ['kubernetes.labels.app']
          fields:
            - 'kubernetes.labels.app'

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']


There are a few important concepts to understand about this Filebeat ConfigMap:


  • hints.enabled: This activates Filebeat's hints-based autodiscover for Kubernetes. With it, we can pass configuration to the Filebeat pods directly through pod annotations, for example to specify different multiline patterns and various other settings (see the annotation sketch after this list). More on this is available at the following link:

  • https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html

  • include_annotations: The pod annotations listed here are retained by Filebeat on each log entry. These annotations can later be used to filter logs in the Kibana console.

  • include_labels: Likewise, the pod labels listed here are retained on each log entry and can later be used to filter logs in the Kibana console.

  • We can also match logs from specific namespaces and then process the entries accordingly; the docker log input is used here, and different multiline patterns can be applied to different namespaces.

  • Because we use Elasticsearch as the storage backend, the output is set to Elasticsearch. It could instead point to Redis, Logstash, Kafka, or even a file. More information is available at the following link:

  • https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html

  • The add_cloud_metadata processor attaches some host-specific fields to each log entry, which is helpful when we want to filter the logs of a particular worker node.
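As a concrete illustration of the hints mechanism mentioned above, the following is a minimal sketch of how a workload could override its multiline settings purely through pod annotations. The pod name, image, and pattern are hypothetical; the co.elastic.logs/* annotation keys are the ones documented for Filebeat's hints-based autodiscover.


apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod                 # hypothetical pod name
  namespace: myapp
  annotations:
    # Explicitly enable log collection for this pod
    co.elastic.logs/enabled: "true"
    # Lines that do not start with a date are treated as continuations of the previous line
    co.elastic.logs/multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    co.elastic.logs/multiline.negate: "true"
    co.elastic.logs/multiline.match: "after"
spec:
  containers:
  - name: myapp
    image: myapp:latest           # hypothetical image


Because hints.enabled is set to true in the ConfigMap above, Filebeat picks these annotations up automatically, with no change to the DaemonSet itself.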

Deploying the Filebeat DaemonSet

Use the following manifest to deploy the Filebeat DaemonSet:


apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      containers:
      - name: filebeat
        image: elastic/filebeat:6.5.4
        args: [
          "-c", "/usr/share/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch.elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---


Let's look at what is happening here:


  • The logs for every pod are written under /var/lib/docker/containers on the host. We mount this directory from the host into the Filebeat pod, and Filebeat then processes the logs according to the configuration provided.

  • The environment variable ELASTICSEARCH_HOST is set to elasticsearch.elasticsearch to reference the Elasticsearch client service created in the first part of this tutorial. If you already have an Elasticsearch cluster running, the environment variable should point to it instead (a minimal sketch follows this list).
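For reference, a minimal sketch of the container env section when pointing the DaemonSet at an existing, external Elasticsearch cluster could look like the following; the hostname es.example.internal is an assumption and should be replaced with your own endpoint:


        env:
        - name: ELASTICSEARCH_HOST
          value: es.example.internal   # hypothetical address of an existing Elasticsearch cluster
        - name: ELASTICSEARCH_PORT
          value: "9200"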


Note the following settings in the manifest:


...
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
...


This ensures that our Filebeat DaemonSet also schedules a pod on each master node. Once the DaemonSet has been deployed, we can check whether our pods were scheduled correctly:


root$ kubectl -n logging get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
filebeat-4kchs   1/1     Running   0          6d    100.96.8.2    ip-10-10-30-206.us-east-2.compute.internal   <none>           <none>
filebeat-6nrpc   1/1     Running   0          6d    100.96.7.6    ip-10-10-29-252.us-east-2.compute.internal   <none>           <none>
filebeat-7qs2s   1/1     Running   0          6d    100.96.1.6    ip-10-10-30-161.us-east-2.compute.internal   <none>           <none>
filebeat-j5xz6   1/1     Running   0          6d    100.96.5.3    ip-10-10-28-186.us-east-2.compute.internal   <none>           <none>
filebeat-pskg5   1/1     Running   0          6d    100.96.64.4   ip-10-10-29-142.us-east-2.compute.internal   <none>           <none>
filebeat-vjdtg   1/1     Running   0          6d    100.96.65.3   ip-10-10-30-118.us-east-2.compute.internal   <none>           <none>
filebeat-wm24j   1/1     Running   0          6d    100.96.0.4    ip-10-10-28-162.us-east-2.compute.internal   <none>           <none>

root$ kubectl get nodes -o wide
NAME                                         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
ip-10-10-28-162.us-east-2.compute.internal   Ready    master   6d    v1.14.8   10.10.28.162   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-28-186.us-east-2.compute.internal   Ready    node     6d    v1.14.8   10.10.28.186   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-29-142.us-east-2.compute.internal   Ready    master   6d    v1.14.8   10.10.29.142   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-29-252.us-east-2.compute.internal   Ready    node     6d    v1.14.8   10.10.29.252   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-30-118.us-east-2.compute.internal   Ready    master   6d    v1.14.8   10.10.30.118   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-30-161.us-east-2.compute.internal   Ready    node     6d    v1.14.8   10.10.30.161   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-30-206.us-east-2.compute.internal   Ready    node     6d    v1.14.8   10.10.30.206


If we tail the logs of one of the pods, we can clearly see that it has connected to Elasticsearch and has started harvesters for the files. This can be seen in the following snippet:


2019-11-19T06:22:03.435Z  INFO  log/input.go:138  Configured paths: [/var/lib/docker/containers/c2b29f5e06eb8affb2cce7cf2501f6f824a2fd83418d09823faf4e74a5a51eb7/*.log]
2019-11-19T06:22:03.435Z  INFO  input/input.go:114  Starting input of type: docker; ID: 4134444498769889169
2019-11-19T06:22:04.786Z  INFO  input/input.go:149  input ticker stopped
2019-11-19T06:22:04.786Z  INFO  input/input.go:167  Stopping Input: 4134444498769889169
2019-11-19T06:22:19.295Z  INFO  [monitoring]  log/log.go:144  Non-zero metrics in the last 30s  {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641680,"time":{"ms":16}},"total":{"ticks":2471920,"time":{"ms":180},"value":2471920},"user":{"ticks":1830240,"time":{"ms":164}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549390018}},"memstats":{"gc_next":47281968,"memory_alloc":29021760,"memory_total":156062982472}},"filebeat":{"events":{"added":111,"done":111},"harvester":{"closed":2,"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":108,"batches":15,"total":108},"read":{"bytes":69},"write":{"bytes":123536}},"pipeline":{"clients":1847,"events":{"active":0,"filtered":3,"published":108,"total":111},"queue":{"acked":108}}},"registrar":{"states":{"current":87,"update":111},"writes":{"success":18,"total":18}},"system":{"load":{"1":0.98,"15":1.71,"5":1.59,"norm":{"1":0.0613,"15":0.1069,"5":0.0994}}}}}}
2019-11-19T06:22:49.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641720,"time":{"ms":44}},"total":{"ticks":2472030,"time":{"ms":116},"value":2472030},"user":{"ticks":1830310,"time":{"ms":72}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549420018}},"memstats":{"gc_next":47281968,"memory_alloc":38715472,"memory_total":156072676184}},"filebeat":{"events":{"active":12,"added":218,"done":206},"harvester":{"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":206,"batches":24,"total":206},"read":{"bytes":102},"write":{"bytes":269666}},"pipeline":{"clients":1847,"events":{"active":12,"published":218,"total":218},"queue":{"acked":206}}},"registrar":{"states":{"current":87,"update":206},"writes":{"success":24,"total":24}},"system":{"load":{"1":1.22,"15":1.7,"5":1.58,"norm":{"1":0.0763,"15":0.1063,"5":0.0988}}}}}}
2019-11-19T06:23:19.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641750,"time":{"ms":28}},"total":{"ticks":2472110,"time":{"ms":72},"value":2472110},"user":{"ticks":1830360,"time":{"ms":44}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549450017}},"memstats":{"gc_next":47281968,"memory_alloc":43140256,"memory_total":156077100968}},"filebeat":{"events":{"active":-12,"added":43,"done":55},"harvester":{"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":55,"batches":12,"total":55},"read":{"bytes":51},"write":{"bytes":70798}},"pipeline":{"clients":1847,"events":{"active":0,"published":43,"total":43},"queue":{"acked":55}}},"registrar":{"states":{"current":87,"update":55},"writes":{"success":12,"total":12}},"system":{"load":{"1":0.99,"15":1.67,"5":1.49,"norm":{"1":0.0619,"15":0.1044,"5":0.0931}}}}}}
2019-11-19T06:23:25.261Z INFO log/harvester.go:255 Harvester started for file: /var/lib/docker/containers/ccb7dc75ecc755734f6befc4965b9fdae74d59810914101eadf63daa69eb62e2/ccb7dc75ecc755734f6befc4965b9fdae74d59810914101eadf63daa69eb62e2-json.log
2019-11-19T06:23:49.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641780,"time":{"ms":28}},"total":{"ticks":2472310,"time":{"ms":196},"value":2472310},"user":{"ticks":1830530,"time":{"ms":168}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":21},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549480018}},"memstats":{"gc_next":47789200,"memory_alloc":31372376,"memory_total":156086697176,"rss":-1064960}},"filebeat":{"events":{"active":16,"added":170,"done":154},"harvester":{"open_files":16,"running":14,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":153,"batches":24,"total":153},"read":{"bytes":115},"write":{"bytes":207569}},"pipeline":{"clients":1847,"events":{"active":16,"filtered":1,"published":169,"total":170},"queue":{"acked":153}}},"registrar":{"states":{"current":87,"update":154},"writes":{"success":25,"total":25}},"system":{"load":{"1":0.87,"15":1.63,"5":1.41,"norm":{"1":0.0544,"15":0.1019,"5":0.0881}}}}}}


Once all the pods are running, we can create an index pattern of the form filebeat-* in Kibana (Filebeat indices are generally timestamped). As soon as the index pattern has been created, all of the available searchable fields can be seen and imported. Finally, we can search our application logs and build dashboards where needed. It is strongly recommended to use a JSON logger in your applications, since it makes log processing much easier and messages can be parsed readily; a sketch of the relevant hints follows.
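If an application already emits JSON logs, the hints mechanism described earlier can also decode them at collection time. The following is a minimal sketch, assuming the log line keeps its text under a message key; the co.elastic.logs/json.* annotation keys come from the Filebeat hints documentation:


  annotations:
    # Decode each log line as JSON and lift its fields to the top level of the event
    co.elastic.logs/json.keys_under_root: "true"
    co.elastic.logs/json.add_error_key: "true"
    co.elastic.logs/json.message_key: "message"


With the fields lifted to the top level of the event, they appear as individual searchable fields under the filebeat-* index pattern in Kibana.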

Summary

This concludes the configuration of our logging stack. All of the configuration files provided here have been tested in production and are ready to deploy. Feel free to put them into practice, and we would love to hear about your own Kubernetes experiences.


Original article:

https://appfleet.com/blog/part-2-efk-stack-on-kubernetes/


This article is reprinted from the WeChat public account RancherLabs (ID: RancherLabs).

