Filebeat ---> Elasticsearch. Shipping logs directly from Filebeat to Elasticsearch is generally not recommended: with many Filebeat clients each holding long-lived TCP connections to ES, the cluster comes under heavy pressure. With only a few clients, though, it works fine, so let's give it a try.
## Official documentation
https://www.elastic.co/guide/en/beats/filebeat/7.17/index.html
We start with the following filebeat.yml:
```yaml
filebeat.config.inputs:
  enabled: true
  path: conf.d/*.yml

## Output to Elasticsearch
output.elasticsearch:
  hosts: ["zz.cn:9200"]
  #index: "testtest"
  setup.template.name: "testtest"
  setup.template.pattern: "testtest-*"

# Logging
logging.level: info
```
The input file under conf.d contains:
```yaml
- type: log
  paths:
    - /home/data/logs/*/*.access.logstash_json
  fields_under_root: true
  json.overwrite_keys: true
  ignore_older: 5m
  scan_frequency: 1s
  fields:
    type: access
  processors:
    - drop_fields:
        fields: ['input', 'offset', 'prospector']
```
With this configuration the data does reach Elasticsearch, but it all lands in the default filebeat-* index, which is clearly not what we want.
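You can confirm this by listing the matching indices on the cluster; the _cat API is standard Elasticsearch, and the host is the one from the config above:

```sh
# List indices created under the default Filebeat index pattern
curl 'http://zz.cn:9200/_cat/indices/filebeat-*?v'
```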
To write to an index of our own, we change filebeat.yml as follows, leaving everything else untouched:
```yaml
filebeat.config.inputs:
  enabled: true
  path: conf.d/*.yml

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

## Output to Elasticsearch
output.elasticsearch:
  hosts: ["zz.cn:9200"]
  index: "testtest"
  setup.template.name: "testtest"
  setup.template.pattern: "testtest-*"

# Logging
logging.level: info
```
Then we run Filebeat, with this result:
```
[www@me03-db-master filebeat-7.3.0-linux-x86_64]$ ./filebeat
Exiting: setup.template.name and setup.template.pattern have to be set if index name is modified
```
Clearly that doesn't work, and it has to be a configuration problem. Checking the official reference shows that the setup.* options are top-level settings, so we modify the file as follows (moving the setup.* lines flush to the left margin instead of nesting them under output.elasticsearch):
```yaml
filebeat.config.inputs:
  enabled: true
  path: conf.d/*.yml

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

## Template setup: top-level, not nested under the output
setup.template.enabled: false
setup.template.name: "xxoo"
setup.template.pattern: "xxoo-*"

## Output to Elasticsearch
output.elasticsearch:
  hosts: ["zz.cn:9200"]
  index: "xxoo-%{+yyyy.MM.dd}"

# Logging
logging.level: info
```
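Before starting, the config file and the ES connection can be sanity-checked with Filebeat's built-in test subcommands:

```sh
# Validate the configuration file
./filebeat test config -c filebeat.yml
# Verify that the elasticsearch output is reachable
./filebeat test output -c filebeat.yml
```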
With that, Filebeat starts normally:
```
$ ./filebeat -c filebeat.yml
```
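Once events start flowing, the dated index should appear on the cluster; a quick way to verify (host taken from the config above):

```sh
# The index pattern xxoo-* should now show today's xxoo-yyyy.MM.dd index
curl 'http://zz.cn:9200/_cat/indices/xxoo-*?v'
```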
For the real setup we want different log types routed to different indices. The final filebeat.yml is configured as follows:
```yaml
filebeat.config.inputs:
  enabled: true
  path: conf.d/*.yml

## Output to Elasticsearch (Tencent Cloud test-environment ES cluster)
output.elasticsearch:
  hosts: ["172.16.32.42:9200"]
  username: "elastic"
  password: "xxxxxx"
  indices:
    - index: "mes-data-api.xx.com-%{[type]}-%{+yyyy.MM.dd}"
      when.contains:
        type: "access"
    - index: "mes-data-api-daemon-%{+yyyy.MM.dd}"
      when.contains:
        type: "daemon"

# Logging
logging.level: info
```
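Each event carries a top-level `type` field, set per input via `fields` plus `fields_under_root: true` (see the two input files below); `when.contains` matches on that field and picks the corresponding index. After some traffic, both index families can be checked in one query (host and user taken from the config above; curl prompts for the password):

```sh
# List all daily indices created by either routing rule
curl -u elastic 'http://172.16.32.42:9200/_cat/indices/mes-data-api*?v'
```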
daemon.yml under conf.d contains:
```yaml
- type: log
  paths:
    - /home/ubuntu/xx-production-test/logs.log
  fields:
    type: daemon
  fields_under_root: true
  tail_files: false      # read existing content from the beginning
  ignore_older: 5m
  multiline.type: pattern
  multiline.pattern: '^ecoflow-'
  multiline.negate: true
  multiline.match: after
  processors:            # drop fields we don't need
    - drop_fields:
        fields: ['input', 'offset', 'prospector']
```
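The multiline block glues continuation lines onto the line that starts a record: any line not matching `^ecoflow-` (negate: true) is appended after the previous matching line (match: after). With a hypothetical log excerpt like the one below, the first three lines become a single event:

```
ecoflow-daemon 2021-08-01 12:00:00 ERROR request failed
java.lang.NullPointerException
        at com.example.Handler.process(Handler.java:42)
ecoflow-daemon 2021-08-01 12:00:01 INFO next record
```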
And access.yml contains:
```yaml
- type: log
  paths:
    - /home/data/logs/mes-data-api.ecoflow.com/*.access.logstash_json
  fields_under_root: true
  json.overwrite_keys: true
  ignore_older: 5m
  scan_frequency: 1s
  fields:
    type: access
  processors:
    - drop_fields:
        fields: ['input', 'offset', 'prospector']
```
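The access log is already emitted as one JSON object per line (a logstash_json-style format); a hypothetical line might look like this:

```
{"@timestamp":"2021-08-01T12:00:00+08:00","remote_addr":"1.2.3.4","request":"GET /api/v1/data","status":200,"request_time":0.012}
```

One caveat from the Filebeat docs: json.overwrite_keys only takes effect when json.keys_under_root is also enabled, so if the decoded JSON keys should overwrite Filebeat's own fields (e.g. @timestamp), adding `json.keys_under_root: true` to this input may be necessary.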