
Filebeat Configuration and Deployment Guide

  • 2020-11-01


In this article, we will look at how to configure Filebeat to run as a DaemonSet in our Kubernetes cluster and ship logs to an Elasticsearch backend. We use Filebeat rather than FluentD or Fluent Bit because it is an extremely lightweight utility with first-class Kubernetes support, which makes this a production-ready setup.

Deployment Architecture

Filebeat will run as a DaemonSet in our Kubernetes cluster. It will:


  • Be deployed in a dedicated namespace called logging

  • Have its pods scheduled on both master and worker nodes

  • Pods on master nodes will forward api-server logs for auditing and cluster administration

  • Pods on worker nodes will forward workload-related logs for application observability

Create the Filebeat ServiceAccount and ClusterRole

Deploy the following manifest to create the permissions required by the Filebeat pods:


apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging # must match the namespace the ServiceAccount lives in
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io


From a security standpoint, we should keep the ClusterRole's permissions as restricted as possible: if any pod associated with this service account is compromised, the attacker cannot gain access to the entire cluster or the applications running in it.
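
As a quick sanity check (a sketch, assuming the manifests above were applied unchanged), kubectl auth can-i can impersonate the service account to confirm the grants are as narrow as intended:

# Allowed: the ClusterRole grants get/watch/list on pods and namespaces
root$ kubectl auth can-i list pods --as=system:serviceaccount:logging:filebeat
yes
# Denied: anything beyond that should be refused
root$ kubectl auth can-i create deployments --as=system:serviceaccount:logging:filebeat
no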

Create the Filebeat ConfigMap

Use the following manifest to create a ConfigMap that will be consumed by the Filebeat pods:


apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
    #  inputs:
    #    path: ${path.config}/inputs.d/*.yml
    #    reload.enabled: true
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: true

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          include_annotations: ["artifact.spinnaker.io/name", "ad.datadoghq.com/tags"]
          include_labels: ["app.kubernetes.io/name"]
          labels.dedot: true
          annotations.dedot: true
          templates:
            - condition:
                equals:
                  kubernetes.namespace: myapp # the namespace your app runs in; add more conditions for additional namespaces
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline:
                    pattern: '^[A-Za-z ]+[0-9]{2} (?:[01]\d|2[0123]):(?:[012345]\d):(?:[012345]\d)' # timestamp regex for the app logs; change it to match your format
                    negate: true
                    match: after
            - condition:
                equals:
                  kubernetes.namespace: elasticsearch
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline:
                    pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}|^[0-9]{4}-[0-9]{2}-[0-9]{2}T'
                    negate: true
                    match: after

    processors:
      - add_cloud_metadata: ~
      - drop_fields:
          when:
            has_fields: ['kubernetes.labels.app']
          fields:
            - 'kubernetes.labels.app'

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']


There are a few important concepts to understand about the Filebeat ConfigMap:


  • hints.enabled: This activates Filebeat's hints module for Kubernetes. It lets us pass configuration directly to the Filebeat pods via pod annotations, for example to specify multiline patterns and various other kinds of settings (see the annotation sketch after this list). More on this can be found at the following link:

  • https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html

  • include_annotations: This makes Filebeat retain the listed pod annotations on each log entry it ships. The annotations can later be used to filter logs in the Kibana console.

  • include_labels: Similarly, this makes Filebeat retain the listed pod labels on each log entry, and they too can later be used to filter logs in the Kibana console.

  • We can also target logs from a specific namespace and then process the entries accordingly. The docker input type is used here, and different namespaces can be given different multiline patterns.

  • Since we use Elasticsearch as the storage backend, the output is set to Elasticsearch. It could just as well point to Redis, Logstash, Kafka, or even a file (a Logstash sketch also follows this list). More information is available at the following link:

  • https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html

  • The cloud metadata processor adds host-specific fields to each log entry. This is helpful when we try to filter logs for a specific worker node.
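
As mentioned in the hints.enabled bullet above, per-pod configuration can be supplied through co.elastic.logs/* annotations. A minimal sketch (the pod name, image, and pattern below are hypothetical, for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod                      # hypothetical pod
  namespace: myapp
  annotations:
    # Picked up by the Filebeat hints provider: lines that do not start
    # with a date are appended to the preceding log entry.
    co.elastic.logs/multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    co.elastic.logs/multiline.negate: 'true'
    co.elastic.logs/multiline.match: 'after'
spec:
  containers:
  - name: myapp
    image: myapp:latest                # hypothetical image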
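
And as a sketch of swapping the output backend, pointing Filebeat at Logstash instead of Elasticsearch only requires replacing the output section of filebeat.yml; the service name below is an assumption:

# Replaces output.elasticsearch (only one output may be enabled at a time)
output.logstash:
  hosts: ['logstash.logging:5044']     # assumed Logstash service, default Beats port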

Deploy the Filebeat DaemonSet

Use the following manifest to deploy the Filebeat DaemonSet:


apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      containers:
      - name: filebeat
        image: elastic/filebeat:6.5.4
        args: [
          "-c", "/usr/share/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch.elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
          # This guide does not create a filebeat-inputs ConfigMap (inputs are
          # commented out in filebeat.yml), so mark it optional to let the pods start.
          optional: true
      # The data folder stores a registry of read status for all files,
      # so we don't resend everything on a Filebeat pod restart.
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate


Let's look at what is happening here:


  • Each pod's logs are written to /var/lib/docker/containers on the host. We mount this directory from the host into the Filebeat pod, and Filebeat then processes the logs according to the configuration provided.

  • We set the environment variable ELASTICSEARCH_HOST to elasticsearch.elasticsearch to reference the Elasticsearch client service created in the first part of this tutorial. If you already have an Elasticsearch cluster running, the environment variable should point to it instead, as sketched below.
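
As a minimal sketch of pointing at an existing external cluster (the hostname es.example.internal is an assumption), the container env in the DaemonSet would change to:

        env:
        - name: ELASTICSEARCH_HOST
          value: es.example.internal   # hypothetical external Elasticsearch endpoint
        - name: ELASTICSEARCH_PORT
          value: "9200"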


Note the following setting in the manifest:


...
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
...


This ensures that our Filebeat DaemonSet also schedules a pod on every master node. Once the DaemonSet is deployed, we can check whether the pods were scheduled properly.


root$ kubectl -n logging get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
filebeat-4kchs   1/1     Running   0          6d    100.96.8.2    ip-10-10-30-206.us-east-2.compute.internal   <none>           <none>
filebeat-6nrpc   1/1     Running   0          6d    100.96.7.6    ip-10-10-29-252.us-east-2.compute.internal   <none>           <none>
filebeat-7qs2s   1/1     Running   0          6d    100.96.1.6    ip-10-10-30-161.us-east-2.compute.internal   <none>           <none>
filebeat-j5xz6   1/1     Running   0          6d    100.96.5.3    ip-10-10-28-186.us-east-2.compute.internal   <none>           <none>
filebeat-pskg5   1/1     Running   0          6d    100.96.64.4   ip-10-10-29-142.us-east-2.compute.internal   <none>           <none>
filebeat-vjdtg   1/1     Running   0          6d    100.96.65.3   ip-10-10-30-118.us-east-2.compute.internal   <none>           <none>
filebeat-wm24j   1/1     Running   0          6d    100.96.0.4    ip-10-10-28-162.us-east-2.compute.internal   <none>           <none>

root$ kubectl get nodes -o wide
NAME                                         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
ip-10-10-28-162.us-east-2.compute.internal   Ready    master   6d    v1.14.8   10.10.28.162   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-28-186.us-east-2.compute.internal   Ready    node     6d    v1.14.8   10.10.28.186   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-29-142.us-east-2.compute.internal   Ready    master   6d    v1.14.8   10.10.29.142   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-29-252.us-east-2.compute.internal   Ready    node     6d    v1.14.8   10.10.29.252   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-30-118.us-east-2.compute.internal   Ready    master   6d    v1.14.8   10.10.30.118   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-30-161.us-east-2.compute.internal   Ready    node     6d    v1.14.8   10.10.30.161   <none>        Debian GNU/Linux 9 (stretch)   4.9.0-9-amd64    docker://18.6.3
ip-10-10-30-206.us-east-2.compute.internal   Ready    node     6d    v1.14.8   10.10.30.206


If we tail the logs of one of the pods, we can clearly see that it has connected to Elasticsearch and has started harvesters for the files. This is visible in the snippet below.
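
For example, using one of the pod names from the output above:

root$ kubectl -n logging logs -f filebeat-4kchs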


2019-11-19T06:22:03.435Z  INFO  log/input.go:138  Configured paths: [/var/lib/docker/containers/c2b29f5e06eb8affb2cce7cf2501f6f824a2fd83418d09823faf4e74a5a51eb7/*.log]
2019-11-19T06:22:03.435Z  INFO  input/input.go:114  Starting input of type: docker; ID: 4134444498769889169
2019-11-19T06:22:04.786Z  INFO  input/input.go:149  input ticker stopped
2019-11-19T06:22:04.786Z  INFO  input/input.go:167  Stopping Input: 4134444498769889169
2019-11-19T06:22:19.295Z  INFO  [monitoring]  log/log.go:144  Non-zero metrics in the last 30s  {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641680,"time":{"ms":16}},"total":{"ticks":2471920,"time":{"ms":180},"value":2471920},"user":{"ticks":1830240,"time":{"ms":164}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549390018}},"memstats":{"gc_next":47281968,"memory_alloc":29021760,"memory_total":156062982472}},"filebeat":{"events":{"added":111,"done":111},"harvester":{"closed":2,"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":108,"batches":15,"total":108},"read":{"bytes":69},"write":{"bytes":123536}},"pipeline":{"clients":1847,"events":{"active":0,"filtered":3,"published":108,"total":111},"queue":{"acked":108}}},"registrar":{"states":{"current":87,"update":111},"writes":{"success":18,"total":18}},"system":{"load":{"1":0.98,"15":1.71,"5":1.59,"norm":{"1":0.0613,"15":0.1069,"5":0.0994}}}}}}
2019-11-19T06:22:49.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641720,"time":{"ms":44}},"total":{"ticks":2472030,"time":{"ms":116},"value":2472030},"user":{"ticks":1830310,"time":{"ms":72}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549420018}},"memstats":{"gc_next":47281968,"memory_alloc":38715472,"memory_total":156072676184}},"filebeat":{"events":{"active":12,"added":218,"done":206},"harvester":{"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":206,"batches":24,"total":206},"read":{"bytes":102},"write":{"bytes":269666}},"pipeline":{"clients":1847,"events":{"active":12,"published":218,"total":218},"queue":{"acked":206}}},"registrar":{"states":{"current":87,"update":206},"writes":{"success":24,"total":24}},"system":{"load":{"1":1.22,"15":1.7,"5":1.58,"norm":{"1":0.0763,"15":0.1063,"5":0.0988}}}}}}
2019-11-19T06:23:19.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641750,"time":{"ms":28}},"total":{"ticks":2472110,"time":{"ms":72},"value":2472110},"user":{"ticks":1830360,"time":{"ms":44}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549450017}},"memstats":{"gc_next":47281968,"memory_alloc":43140256,"memory_total":156077100968}},"filebeat":{"events":{"active":-12,"added":43,"done":55},"harvester":{"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":55,"batches":12,"total":55},"read":{"bytes":51},"write":{"bytes":70798}},"pipeline":{"clients":1847,"events":{"active":0,"published":43,"total":43},"queue":{"acked":55}}},"registrar":{"states":{"current":87,"update":55},"writes":{"success":12,"total":12}},"system":{"load":{"1":0.99,"15":1.67,"5":1.49,"norm":{"1":0.0619,"15":0.1044,"5":0.0931}}}}}}
2019-11-19T06:23:25.261Z INFO log/harvester.go:255 Harvester started for file: /var/lib/docker/containers/ccb7dc75ecc755734f6befc4965b9fdae74d59810914101eadf63daa69eb62e2/ccb7dc75ecc755734f6befc4965b9fdae74d59810914101eadf63daa69eb62e2-json.log
2019-11-19T06:23:49.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641780,"time":{"ms":28}},"total":{"ticks":2472310,"time":{"ms":196},"value":2472310},"user":{"ticks":1830530,"time":{"ms":168}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":21},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549480018}},"memstats":{"gc_next":47789200,"memory_alloc":31372376,"memory_total":156086697176,"rss":-1064960}},"filebeat":{"events":{"active":16,"added":170,"done":154},"harvester":{"open_files":16,"running":14,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":153,"batches":24,"total":153},"read":{"bytes":115},"write":{"bytes":207569}},"pipeline":{"clients":1847,"events":{"active":16,"filtered":1,"published":169,"total":170},"queue":{"acked":153}}},"registrar":{"states":{"current":87,"update":154},"writes":{"success":25,"total":25}},"system":{"load":{"1":0.87,"15":1.63,"5":1.41,"norm":{"1":0.0544,"15":0.1019,"5":0.0881}}}}}}


Once all the pods are running, we can create an index pattern of the form filebeat-* in Kibana (Filebeat indices are time-stamped). As soon as the index pattern is created, all the available fields become searchable and can be imported. Finally, we can search our application logs and create dashboards where needed. It is strongly recommended to use a JSON logger in your applications, since it makes log processing much easier and messages can be parsed readily.
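
The index pattern can also be created programmatically through Kibana's saved objects API rather than the console. A minimal sketch, assuming Kibana is reachable in-cluster at kibana.logging:5601 (the host is an assumption):

# Create a time-based filebeat-* index pattern via the Kibana saved objects API
curl -X POST 'http://kibana.logging:5601/api/saved_objects/index-pattern' \
  -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -d '{"attributes": {"title": "filebeat-*", "timeFieldName": "@timestamp"}}'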

Summary

This concludes the configuration of our logging stack. All the configuration files provided here have been tested in production and are ready to deploy. Feel free to try them out, and we would love to hear about your own Kubernetes practices.


Original article:

https://appfleet.com/blog/part-2-efk-stack-on-kubernetes/


This article is reproduced from the WeChat public account RancherLabs (ID: RancherLabs).

