
[ELK] A Complete Guide to Building a Distributed Logging Platform

>ELK is an open-source platform for log collection, storage, search, and visual analysis, made up of Elasticsearch, Logstash, and Kibana.

Contents

1. Environment Setup

1.1 Create the Directories

1.2 Create the Configuration Files

2. System Integration

2.1 FileBeat

2.2 Application Integration

2.3 Viewing Logs


1. Environment Setup

1.1 Create the Directories

(Figure: ELK architecture diagram)

Create the directory structure:

mkdir -p /opt/elk/{elasticsearch/{data,logs,plugins,config},logstash/{config,pipeline},kibana/config,filebeat/{config,data}}

Set permissions (from inside /opt/elk):

chmod -R 777 elasticsearch

chmod -R 777 logstash

chmod -R 777 kibana

chmod -R 777 filebeat

1.2 Create the Configuration Files

Logstash configuration:

vim logstash/config/logstash.yml

http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch:9200"]
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "mH0awV4RrkN2"
# Log level
log.level: info

vim logstash/pipeline/logstash.conf

input {
  beats {
    port => 5045
    ssl => false
  }
  tcp {
    port => 5044
    codec => json
  }
}

filter {
  # RuoYi application logs (highest priority)
  if [app_name] {
    mutate {
      add_field => { "[@metadata][target_index]" => "ruoyi-logs-%{+YYYY.MM.dd}" }
    }
  }
  # System logs
  else if [fields][log_type] == "system" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      target => "@timestamp"
    }
    mutate {
      add_field => { "[@metadata][target_index]" => "system-log-%{+YYYY.MM.dd}" }
    }
  }
  # Docker container logs
  else if [container] {
    # Try to parse JSON messages
    if [message] =~ /^\{.*\}$/ {
      json {
        source => "message"
        skip_on_invalid_json => true
      }
    }
    mutate {
      add_field => { "[@metadata][target_index]" => "docker-log-%{+YYYY.MM.dd}" }
    }
  }
  # Everything else, uncategorized
  else {
    mutate {
      add_field => { "[@metadata][target_index]" => "logstash-%{+YYYY.MM.dd}" }
    }
  }
  # Drop fields we do not need
  mutate {
    remove_field => ["agent", "ecs", "input"]
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "mH0awV4RrkN2"
    index => "%{[@metadata][target_index]}"
  }
  # Debug output (disable in production)
  # stdout {
  #   codec => rubydebug
  # }
}
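To make the routing rules in the filter block easier to follow, here is a standalone Python sketch of the same decision logic (illustration only, not part of the deployment; the function name and sample events are mine):

```python
from datetime import datetime, timezone

def target_index(event: dict, ts: datetime) -> str:
    """Mirror the index-routing rules of the Logstash filter above.
    Logstash derives the date suffix from the event's @timestamp;
    here we pass the timestamp in explicitly."""
    day = ts.strftime("%Y.%m.%d")
    if event.get("app_name"):                                  # RuoYi app logs
        return f"ruoyi-logs-{day}"
    if event.get("fields", {}).get("log_type") == "system":    # system logs
        return f"system-log-{day}"
    if event.get("container"):                                 # Docker logs
        return f"docker-log-{day}"
    return f"logstash-{day}"                                   # everything else

now = datetime(2024, 1, 2, tzinfo=timezone.utc)
print(target_index({"app_name": "ruoyi-system"}, now))        # ruoyi-logs-2024.01.02
print(target_index({"fields": {"log_type": "system"}}, now))  # system-log-2024.01.02
print(target_index({"container": {"id": "abc"}}, now))        # docker-log-2024.01.02
print(target_index({"message": "hello"}, now))                # logstash-2024.01.02
```

Because the first matching branch wins, an event carrying both `app_name` and `container` lands in the RuoYi index, which is why the comment calls it the highest priority.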

Kibana configuration:

vim kibana/config/kibana.yml

server.name: kibana
server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://elasticsearch:9200"]
# Chinese UI
i18n.locale: "zh-CN"
# Monitoring
monitoring.ui.container.elasticsearch.enabled: true

Filebeat configuration:

I've added system and Docker runtime logs here as an extension point for readers; adapt them as needed so that ELK ingests not only application logs but also Nginx logs, MySQL slow-query logs, and so on.

vim filebeat/config/filebeat.yml

filebeat.inputs:
  # Collect system logs
  - type: log
    enabled: true
    paths:
      - /var/log/messages
      - /var/log/syslog
    tags: ["system"]
    fields:
      log_type: system
  # Collect Docker container logs
  - type: container
    enabled: true
    paths:
      - '/var/lib/docker/containers/*/*.log'
    processors:
      - add_docker_metadata:
          host: "unix:///var/run/docker.sock"

# Output to Logstash
output.logstash:
  hosts: ["logstash:5045"]

# Or output directly to ES (pick one of the two)
#output.elasticsearch:
#  hosts: ["elasticsearch:9200"]
#  username: "elastic"
#  password: "mH0awV4RrkN2"
#  index: "filebeat-%{+yyyy.MM.dd}"

# Kibana settings
setup.kibana:
  host: "kibana:5601"
  username: "elastic"
  password: "mH0awV4RrkN2"

# Log level
logging.level: info
logging.to_files: true
logging.files:
  path: /usr/share/filebeat/logs
  name: filebeat
  keepfiles: 7
  permissions: 0644

# Enable monitoring
monitoring.enabled: true
monitoring.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "mH0awV4RrkN2"
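For context on what the container input actually reads: each file under /var/lib/docker/containers/ is newline-delimited JSON written by Docker's json-file logging driver. A minimal Python sketch of parsing one such line (the field mapping is my illustration, not Filebeat's internals):

```python
import json

def parse_container_line(raw: str) -> dict:
    """Each line in <container-id>-json.log is a JSON object with
    "log" (the message, newline-terminated), "stream" (stdout/stderr)
    and "time" (an RFC 3339 timestamp)."""
    entry = json.loads(raw)
    return {
        "message": entry["log"].rstrip("\n"),
        "stream": entry["stream"],
        "@timestamp": entry["time"],
    }

line = '{"log":"GET /health 200\\n","stream":"stdout","time":"2024-01-02T03:04:05.000Z"}'
print(parse_container_line(line)["message"])  # GET /health 200
```

The add_docker_metadata processor then enriches each event with the container name, image, and labels via the mounted Docker socket.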

Docker Compose configuration:

vim docker-compose.yml

version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
    container_name: elasticsearch
    environment:
      - node.name=es-node-1
      - cluster.name=elk-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512M -Xmx1g"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      # === Security settings ===
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false       # disable HTTP SSL
      - xpack.security.transport.ssl.enabled=false  # disable transport SSL
      # Initial password for the elastic user (important!)
      - ELASTIC_PASSWORD=mH0awV4RrkN2
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./elasticsearch/plugins:/usr/share/elasticsearch/plugins
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk-network
    restart: unless-stopped
    healthcheck:
      # The health check must authenticate
      test: ["CMD-SHELL", "curl -u elastic:mH0awV4RrkN2 -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  logstash:
    image: docker.elastic.co/logstash/logstash:7.12.1
    container_name: logstash
    environment:
      - "LS_JAVA_OPTS=-Xms512m -Xmx512m"
      # Uses the elastic superuser (switch to logstash_system later)
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=mH0awV4RrkN2
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"  # TCP input
      - "5045:5045"  # Beats input
      - "9600:9600"  # Logstash API
    networks:
      - elk-network
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.1
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - I18N_LOCALE=zh-CN
      # Uses the elastic superuser (switch to kibana_system later)
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=mH0awV4RrkN2
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk-network
    depends_on:
      elasticsearch:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:5601/api/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  filebeat:
    image: docker.elastic.co/beats/filebeat:7.12.1
    container_name: filebeat
    user: root
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./filebeat/data:/usr/share/filebeat/data
      # Mount the host's log directories (adjust to your needs)
      - /var/log:/var/log:ro
      # Needed to collect Docker container logs
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: filebeat -e -strict.perms=false
    networks:
      - elk-network
    depends_on:
      - elasticsearch
      - logstash
    restart: unless-stopped

networks:
  elk-network:
    driver: bridge

volumes:
  elasticsearch-data:
    driver: local

Start all the services:

docker-compose up -d
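Once the stack is up, you can verify Elasticsearch the same way the compose health check does. A small Python sketch (the URL and helper names are mine; with security enabled every request needs the Basic auth header that `curl -u` would build):

```python
import base64
import json

ES_HEALTH_URL = "http://127.0.0.1:9200/_cluster/health"  # adjust host if remote

def basic_auth(user: str, password: str) -> str:
    """Authorization header value equivalent to curl -u user:password."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def is_ready(health: dict) -> bool:
    # "yellow" is normal for a single-node cluster: replica shards
    # cannot be assigned, but the cluster is fully usable.
    return health.get("status") in ("yellow", "green")

# A sample response body from GET /_cluster/health:
sample = json.loads('{"cluster_name":"elk-cluster","status":"yellow","number_of_nodes":1}')
print(is_ready(sample))  # True
```

Against the live cluster you would fetch `ES_HEALTH_URL` with `urllib.request.Request` plus an `Authorization` header from `basic_auth("elastic", "mH0awV4RrkN2")`, then feed the JSON body to `is_ready`.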

Open Kibana in a browser:

http://127.0.0.1:5601

2. System Integration

2.1 FileBeat

The canonical flow is for Filebeat to feed Logstash, which then writes into ES, but I really didn't want to fiddle any further, so here I simply skip the Filebeat step and have the application ship logs to Logstash directly.

The standard deployment would be: install Filebeat on every application server, configure the log paths to collect, point the output at the centralized ELK server, and start the Filebeat service.
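Since the application ships straight to the Logstash tcp input (port 5044, json codec), the wire format is just newline-delimited JSON. A Python sketch of what logstash-logback-encoder roughly sends (field names other than app_name are my illustration):

```python
import json
import socket
from datetime import datetime, timezone

def encode_event(message: str, level: str, app_name: str) -> bytes:
    """One newline-delimited JSON event, roughly the shape that
    logstash-logback-encoder writes to the socket."""
    event = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "message": message,
        "level": level,
        "app_name": app_name,  # the field the Logstash filter routes on
    }
    return (json.dumps(event) + "\n").encode("utf-8")

def ship(payload: bytes, host: str = "127.0.0.1", port: int = 5044) -> None:
    """Send the event to the Logstash tcp input (json codec)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

payload = encode_event("order created", "INFO", "ruoyi-system")
print(payload.decode().strip())
# ship(payload)  # uncomment with the stack running
```

Because the event carries app_name, the pipeline's first filter branch routes it into ruoyi-logs-*.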

2.2 Application Integration

Add the Maven dependency:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.2</version>
</dependency>

Local log-file configuration: create logback-elk.xml under the resources directory.


<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Console output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
    </appender>

    <!-- INFO file, rolled daily, kept for 60 days -->
    <appender name="file_info" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/info.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/info.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>60</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>INFO</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- ERROR file, rolled daily, kept for 60 days -->
    <appender name="file_error" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${log.path}/error.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.path}/error.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>60</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <!-- Ship logs to Logstash over TCP; app_name is the field the
         Logstash filter routes on. Several tuning options were garbled
         in the original (values: 5000 1000 16384 5000 true true false
         "5 minutes" 0 512); only the clearly recoverable ones are kept. -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${LOGSTASH_HOST}</destination>
        <keepAliveDuration>5 minutes</keepAliveDuration>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"app_name":"${APP_NAME}"}</customFields>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="console"/>
        <appender-ref ref="file_info"/>
        <appender-ref ref="file_error"/>
        <appender-ref ref="logstash"/>
    </root>
</configuration>

bootstrap.yml configuration

2.3 Viewing Logs

Open the Kibana address, log in with the username and password we configured, and go to the index patterns page.

Click "Create index pattern".

Enter ruoyi-logs-* and click Next.

Choose the time field as follows:

That completes the index pattern.

Next, open the Discover page from the left-hand menu.

Postscript: this guide does not include a Kafka cluster, because I didn't want to fiddle any further; if you know how to integrate an MQ or Kafka, please share in the comments.


