In this article, we will look at how to configure Filebeat to run as a DaemonSet in our Kubernetes cluster and ship logs to an Elasticsearch backend. We use Filebeat rather than Fluentd or Fluent Bit because it is a very lightweight utility with first-class support for Kubernetes, which makes it well suited for production use.
Deployment Architecture
Filebeat will run as a DaemonSet in our Kubernetes cluster. It will:
be deployed in a separate namespace called logging;
have pods scheduled on both the master nodes and the worker nodes;
on the master nodes, forward api-server logs for auditing and cluster administration;
on the worker nodes, forward workload-related logs for application observability.
Creating the Filebeat ServiceAccount and ClusterRole
Deploy the following manifest to create the permissions required by the Filebeat pods:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
From a security standpoint, we should keep the permissions granted by this ClusterRole as restrictive as possible. If any pod associated with this service account is ever compromised, the attacker will not be able to gain access to the entire cluster or the applications running in it.
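After applying the manifest, we can double-check that the binding exists and grants nothing beyond read access. A quick sketch, assuming the manifest above was saved as filebeat-rbac.yaml (the file name is arbitrary):

root$ kubectl apply -f filebeat-rbac.yaml
root$ kubectl auth can-i list pods --as=system:serviceaccount:logging:filebeat          # expected: yes
root$ kubectl auth can-i create deployments --as=system:serviceaccount:logging:filebeat # expected: no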
Creating the Filebeat ConfigMap
Use the following manifest to create the ConfigMap that will be consumed by the Filebeat pods:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
      # inputs:
      #   path: ${path.config}/inputs.d/*.yml
      #   reload.enabled: true
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: true
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          hints.enabled: true
          include_annotations: ["artifact.spinnaker.io/name","ad.datadoghq.com/tags"]
          include_labels: ["app.kubernetes.io/name"]
          labels.dedot: true
          annotations.dedot: true
          templates:
            - condition:
                equals:
                  kubernetes.namespace: myapp # Set the namespace your app runs in; add more conditions for additional namespaces.
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline:
                    pattern: '^[A-Za-z ]+[0-9]{2} (?:[01]\d|2[0123]):(?:[012345]\d):(?:[012345]\d)' # Timestamp regex for the app logs; change it to match your format.
                    negate: true
                    match: after
            - condition:
                equals:
                  kubernetes.namespace: elasticsearch
              config:
                - type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  multiline:
                    pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}|^[0-9]{4}-[0-9]{2}-[0-9]{2}T'
                    negate: true
                    match: after
    processors:
      - add_cloud_metadata: ~
      - drop_fields:
          when:
            has_fields: ['kubernetes.labels.app']
          fields:
            - 'kubernetes.labels.app'
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
There are a few important concepts to understand about this Filebeat ConfigMap:
hints.enabled: this activates Filebeat's hints-based autodiscover for Kubernetes. With it, we can pass configuration directly to the Filebeat pods through pod annotations, for example to specify different multiline patterns and various other kinds of settings. You can read more about this at the following link:
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
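As a quick illustration (the pod name, namespace, image, and pattern below are made up for the example), hints let an application team override the multiline settings for their own workload purely through annotations, without touching the Filebeat ConfigMap:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod                   # hypothetical workload
  namespace: myapp
  annotations:
    co.elastic.logs/multiline.pattern: '^\d{4}-\d{2}-\d{2}'   # hypothetical timestamp format
    co.elastic.logs/multiline.negate: "true"
    co.elastic.logs/multiline.match: "after"
spec:
  containers:
  - name: myapp
    image: myorg/myapp:latest       # hypothetical image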
include_annotations: this makes Filebeat retain the listed pod annotations on the corresponding log entries. These annotations can later be used to filter logs in the Kibana console.
include_labels: similarly, this makes Filebeat retain the listed pod labels on each log entry, and these labels can later be used to filter logs in the Kibana console.
We can also filter logs for a specific namespace and then process the log entries accordingly; the docker log processor is used here. We can likewise use a different multiline pattern for each namespace.
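For example, to cover one more namespace, another template block can be appended under templates: in the ConfigMap above (the namespace name and pattern here are placeholders):

- condition:
    equals:
      kubernetes.namespace: payments          # hypothetical namespace
  config:
    - type: docker
      containers.ids:
        - "${data.kubernetes.container.id}"
      multiline:
        pattern: '^\{'                        # e.g. for logs where each event starts with a JSON brace
        negate: true
        match: after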
Since we are using Elasticsearch as the storage backend, the output is set to Elasticsearch. It could just as well point to Redis, Logstash, Kafka, or even a file. More information can be found at the following link:
https://www.elastic.co/guide/en/beats/filebeat/current/configuring-output.html
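For instance, switching to Logstash or Kafka would only require replacing the output section; Filebeat supports exactly one output at a time, and the hostnames and topic below are placeholders:

output.logstash:
  hosts: ["logstash.logging:5044"]            # hypothetical Logstash service

# or, for Kafka:
output.kafka:
  hosts: ["kafka-0.kafka:9092"]               # hypothetical broker address
  topic: "filebeat-logs"                      # hypothetical topic name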
The add_cloud_metadata processor adds a few host-specific fields to every log entry, which is helpful when we want to filter the logs of a particular worker node.
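As an illustration, on AWS an enriched event carries fields roughly like the excerpt below; the exact names depend on the Filebeat version (6.x nests them under meta.cloud, while 7.x+ uses the ECS cloud.* prefix), and the values shown are made up:

"meta": {
  "cloud": {
    "provider": "ec2",
    "instance_id": "i-0123456789abcdef0",
    "machine_type": "m5.large",
    "region": "us-east-2",
    "availability_zone": "us-east-2a"
  }
}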
Deploying the Filebeat DaemonSet
Use the following manifest to deploy the Filebeat DaemonSet:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      containers:
      - name: filebeat
        image: elastic/filebeat:6.5.4
        args: [
          "-c", "/usr/share/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch.elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/filebeat/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
Let's look at what is going on here:
The logs of every pod are written to /var/lib/docker/containers on the host. We mount this directory from the host into the Filebeat pod, and Filebeat then processes the logs according to the configuration we provided.
We set the environment variable ELASTICSEARCH_HOST to elasticsearch.elasticsearch to reference the Elasticsearch client service created in the first part of this tutorial. If you already have an Elasticsearch cluster running, the environment variable should point to it instead.
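For instance, pointing to an existing external cluster only requires changing the env section of the DaemonSet; the hostname below is a placeholder:

env:
- name: ELASTICSEARCH_HOST
  value: es.example.internal      # hypothetical external Elasticsearch endpoint
- name: ELASTICSEARCH_PORT
  value: "9200"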
Note the following setting in the manifest:
...
tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
...
This ensures that our Filebeat DaemonSet will also schedule a pod on each master node. Once the Filebeat DaemonSet is deployed, we can check whether our pods have been scheduled correctly:
root$ kubectl -n logging get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
filebeat-4kchs 1/1 Running 0 6d 100.96.8.2 ip-10-10-30-206.us-east-2.compute.internal <none> <none>
filebeat-6nrpc 1/1 Running 0 6d 100.96.7.6 ip-10-10-29-252.us-east-2.compute.internal <none> <none>
filebeat-7qs2s 1/1 Running 0 6d 100.96.1.6 ip-10-10-30-161.us-east-2.compute.internal <none> <none>
filebeat-j5xz6 1/1 Running 0 6d 100.96.5.3 ip-10-10-28-186.us-east-2.compute.internal <none> <none>
filebeat-pskg5 1/1 Running 0 6d 100.96.64.4 ip-10-10-29-142.us-east-2.compute.internal <none> <none>
filebeat-vjdtg 1/1 Running 0 6d 100.96.65.3 ip-10-10-30-118.us-east-2.compute.internal <none> <none>
filebeat-wm24j 1/1 Running 0 6d 100.96.0.4 ip-10-10-28-162.us-east-2.compute.internal <none> <none>
root$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-10-28-162.us-east-2.compute.internal Ready master 6d v1.14.8 10.10.28.162 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-28-186.us-east-2.compute.internal Ready node 6d v1.14.8 10.10.28.186 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-29-142.us-east-2.compute.internal Ready master 6d v1.14.8 10.10.29.142 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-29-252.us-east-2.compute.internal Ready node 6d v1.14.8 10.10.29.252 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-30-118.us-east-2.compute.internal Ready master 6d v1.14.8 10.10.30.118 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-30-161.us-east-2.compute.internal Ready node 6d v1.14.8 10.10.30.161 <none> Debian GNU/Linux 9 (stretch) 4.9.0-9-amd64 docker://18.6.3
ip-10-10-30-206.us-east-2.compute.internal Ready node 6d v1.14.8 10.10.30.206
If we tail the logs of one of these pods, we can clearly see that it has connected to Elasticsearch and has started harvesters for the log files. This can be seen in the snippet below.
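For example, picking the first pod from the listing above:

root$ kubectl -n logging logs -f filebeat-4kchs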
2019-11-19T06:22:03.435Z INFO log/input.go:138 Configured paths: [/var/lib/docker/containers/c2b29f5e06eb8affb2cce7cf2501f6f824a2fd83418d09823faf4e74a5a51eb7/*.log]
2019-11-19T06:22:03.435Z INFO input/input.go:114 Starting input of type: docker; ID: 4134444498769889169
2019-11-19T06:22:04.786Z INFO input/input.go:149 input ticker stopped
2019-11-19T06:22:04.786Z INFO input/input.go:167 Stopping Input: 4134444498769889169
2019-11-19T06:22:19.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641680,"time":{"ms":16}},"total":{"ticks":2471920,"time":{"ms":180},"value":2471920},"user":{"ticks":1830240,"time":{"ms":164}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549390018}},"memstats":{"gc_next":47281968,"memory_alloc":29021760,"memory_total":156062982472}},"filebeat":{"events":{"added":111,"done":111},"harvester":{"closed":2,"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":108,"batches":15,"total":108},"read":{"bytes":69},"write":{"bytes":123536}},"pipeline":{"clients":1847,"events":{"active":0,"filtered":3,"published":108,"total":111},"queue":{"acked":108}}},"registrar":{"states":{"current":87,"update":111},"writes":{"success":18,"total":18}},"system":{"load":{"1":0.98,"15":1.71,"5":1.59,"norm":{"1":0.0613,"15":0.1069,"5":0.0994}}}}}}
2019-11-19T06:22:49.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641720,"time":{"ms":44}},"total":{"ticks":2472030,"time":{"ms":116},"value":2472030},"user":{"ticks":1830310,"time":{"ms":72}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549420018}},"memstats":{"gc_next":47281968,"memory_alloc":38715472,"memory_total":156072676184}},"filebeat":{"events":{"active":12,"added":218,"done":206},"harvester":{"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":206,"batches":24,"total":206},"read":{"bytes":102},"write":{"bytes":269666}},"pipeline":{"clients":1847,"events":{"active":12,"published":218,"total":218},"queue":{"acked":206}}},"registrar":{"states":{"current":87,"update":206},"writes":{"success":24,"total":24}},"system":{"load":{"1":1.22,"15":1.7,"5":1.58,"norm":{"1":0.0763,"15":0.1063,"5":0.0988}}}}}}
2019-11-19T06:23:19.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641750,"time":{"ms":28}},"total":{"ticks":2472110,"time":{"ms":72},"value":2472110},"user":{"ticks":1830360,"time":{"ms":44}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":20},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549450017}},"memstats":{"gc_next":47281968,"memory_alloc":43140256,"memory_total":156077100968}},"filebeat":{"events":{"active":-12,"added":43,"done":55},"harvester":{"open_files":15,"running":13}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":55,"batches":12,"total":55},"read":{"bytes":51},"write":{"bytes":70798}},"pipeline":{"clients":1847,"events":{"active":0,"published":43,"total":43},"queue":{"acked":55}}},"registrar":{"states":{"current":87,"update":55},"writes":{"success":12,"total":12}},"system":{"load":{"1":0.99,"15":1.67,"5":1.49,"norm":{"1":0.0619,"15":0.1044,"5":0.0931}}}}}}
2019-11-19T06:23:25.261Z INFO log/harvester.go:255 Harvester started for file: /var/lib/docker/containers/ccb7dc75ecc755734f6befc4965b9fdae74d59810914101eadf63daa69eb62e2/ccb7dc75ecc755734f6befc4965b9fdae74d59810914101eadf63daa69eb62e2-json.log
2019-11-19T06:23:49.295Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":641780,"time":{"ms":28}},"total":{"ticks":2472310,"time":{"ms":196},"value":2472310},"user":{"ticks":1830530,"time":{"ms":168}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":21},"info":{"ephemeral_id":"007e8090-7c62-4b44-97fb-e74e8177dc54","uptime":{"ms":549480018}},"memstats":{"gc_next":47789200,"memory_alloc":31372376,"memory_total":156086697176,"rss":-1064960}},"filebeat":{"events":{"active":16,"added":170,"done":154},"harvester":{"open_files":16,"running":14,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":153,"batches":24,"total":153},"read":{"bytes":115},"write":{"bytes":207569}},"pipeline":{"clients":1847,"events":{"active":16,"filtered":1,"published":169,"total":170},"queue":{"acked":153}}},"registrar":{"states":{"current":87,"update":154},"writes":{"success":25,"total":25}},"system":{"load":{"1":0.87,"15":1.63,"5":1.41,"norm":{"1":0.0544,"15":0.1019,"5":0.0881}}}}}}
Once all the pods are running, we can create an index pattern of the form filebeat-* in Kibana (Filebeat indices are time-stamped). After the index pattern is created, all the available searchable fields are listed and can be imported, and we can then search our application logs and build dashboards as needed. It is strongly recommended to use a JSON logger in your applications, since it makes log processing much easier and the messages can be parsed readily.
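To sketch that last point: if an application writes one JSON object per line, for example {"level":"info","msg":"order created","order_id":42}, a decode_json_fields processor can be added to the processors section of the ConfigMap above so that these keys become directly searchable fields in Kibana. This is a minimal sketch; adjust fields and target to your needs:

processors:
  - add_cloud_metadata: ~
  - decode_json_fields:
      fields: ["message"]          # the field holding the raw JSON log line
      target: ""                   # merge the decoded keys into the root of the event
      overwrite_keys: true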
Summary
This concludes the setup of our logging stack. All of the configuration files provided here have been tested in production and can be deployed right away. Feel free to try them out, and we would love to hear how you run things on Kubernetes.
This article was reproduced from the RancherLabs WeChat public account (ID: RancherLabs).