Installing ELK with Docker to Collect H3C Firewall syslog Logs

I. Install Elasticsearch (ES)

1. Pull all images

docker pull elasticsearch:7.5.1
docker pull logstash:7.5.1
docker pull kibana:7.5.1
docker pull mobz/elasticsearch-head:5-alpine
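
After the pulls finish, a quick check that all four images are present locally:

docker images | grep -E "elasticsearch|logstash|kibana"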

2. Adjust kernel parameters

vi /etc/sysctl.conf
vm.max_map_count=262144

3. Reload the parameters

sysctl -p
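
To confirm the new value took effect:

sysctl vm.max_map_count
# expected output: vm.max_map_count = 262144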

4. Start a temporary ES container

docker run -d \
--name=elasticsearch \
-p 9200:9200 -p 9300:9300 \
-e "cluster.name=elasticsearch" \
-e "discovery.type=single-node" \
-e "ES_JAVA_OPTS=-Xms512m -Xmx1024m" \
elasticsearch:7.5.1

5. Copy files out of the container

mkdir -p /data/elk7/elasticsearch
docker cp elasticsearch:/usr/share/elasticsearch/data /data/elk7/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/logs /data/elk7/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/config /data/elk7/elasticsearch/
chmod 777 -R /data/elk7/elasticsearch/

6. Edit the configuration file

vi /data/elk7/elasticsearch/config/elasticsearch.yml

Contents:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
Note: the last two lines are required; without them the head plugin will be rejected with a cross-origin (CORS) error when it connects.

7. Start elasticsearch

docker rm -f elasticsearch    # remove the temporary container first
docker run -d \
--name=elasticsearch \
--restart=always \
-p 9200:9200 \
-p 9300:9300 \
-e "cluster.name=elasticsearch" \
-e "discovery.type=single-node" \
-e "ES_JAVA_OPTS=-Xms512m -Xmx1024m" \
-v /data/elk7/elasticsearch/config:/usr/share/elasticsearch/config \
-v /data/elk7/elasticsearch/data:/usr/share/elasticsearch/data \
-v /data/elk7/elasticsearch/logs:/usr/share/elasticsearch/logs \
elasticsearch:7.5.1
Once it starts successfully, visit: http://192.168.25.221:9200
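
Besides the browser, the node can also be checked from the shell; cluster health should report a green (or yellow) status:

curl http://192.168.25.221:9200/_cluster/health?pretty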

II. Start the elasticsearch-head plugin

docker run -d \
--name=elasticsearch-head \
--restart=always \
-p 9100:9100 \
docker.io/mobz/elasticsearch-head:5-alpine
Visit: http://192.168.25.221:9100 and connect it to the ES address http://192.168.25.221:9200.
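
If the head UI cannot connect to ES, the CORS settings from step 6 can be verified directly; with http.cors.enabled the response should carry an Access-Control-Allow-Origin header:

curl -sI -H "Origin: http://192.168.25.221:9100" http://192.168.25.221:9200 | grep -i access-control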

III. Install and configure Logstash

1. Start logstash

docker run -d --name=logstash logstash:7.5.1

Check the logs:

docker logs -f logstash

If it started successfully, you should see: Successfully started Logstash API endpoint {:port=>9600}

2. Copy the data out and grant permissions

docker cp logstash:/usr/share/logstash /data/elk7/
mkdir /data/elk7/logstash/config/conf.d
chmod 777 -R /data/elk7/logstash

3. Set the elasticsearch address in the configuration file

vi /data/elk7/logstash/config/logstash.yml
Full contents:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.25.221:9200" ]
path.config: /usr/share/logstash/config/conf.d/*.conf
path.logs: /usr/share/logstash/logs

4. Create the Logstash pipeline configuration

Create a new file, syslog.conf, to collect /var/log/messages:
vi /data/elk7/logstash/config/conf.d/syslog.conf
input {
  file {
    # type tag for this input
    type => "systemlog-localhost"
    # file to collect
    path => "/var/log/messages"
    # where to start reading
    start_position => "beginning"
    # stat interval; default is 1s, 5s is recommended
    stat_interval => "5"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.25.221:9200"]
    index => "logstash-system-localhost-%{+YYYY.MM.dd}"
  }
}
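
Before wiring the container up permanently, the pipeline file can be syntax-checked with Logstash's --config.test_and_exit flag. A one-off container sketch, using the paths set up above (invocation details may vary slightly with the image):

docker run --rm \
-v /data/elk7/logstash:/usr/share/logstash \
logstash:7.5.1 \
bin/logstash --config.test_and_exit -f /usr/share/logstash/config/conf.d/syslog.conf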

5. Grant read permission on the log file

chmod 644 /var/log/messages

6. Start logstash

Remove the temporary container:
docker rm -f logstash
Start the container:
docker run -d \
--name=logstash \
--restart=always \
-p 5044:5044 \
-v /data/elk7/logstash:/usr/share/logstash \
-v /var/log/messages:/var/log/messages \
-v /var/log/syslog/172.16.2.3:/var/log/syslog/172.16.2.3 \
logstash:7.5.1
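
The pipeline above only collects /var/log/messages. To also index the H3C firewall logs that rsyslog writes under /var/log/syslog/172.16.2.3 (see section V below), a second pipeline file can be dropped into the same conf.d directory; the file name, type tag, and index name here are illustrative assumptions, not part of the original setup:

vi /data/elk7/logstash/config/conf.d/h3c-syslog.conf

input {
  file {
    # tag for the firewall logs (assumed name)
    type => "h3c-firewall"
    # daily files written by the rsyslog template in section V
    path => "/var/log/syslog/172.16.2.3/*.log"
    start_position => "beginning"
    stat_interval => "5"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.25.221:9200"]
    # assumed index name for the firewall logs
    index => "logstash-h3c-firewall-%{+YYYY.MM.dd}"
  }
}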

IV. Configure Kibana

1. Create the configuration file

mkdir /data/elk7/kibana/

vi /data/elk7/kibana/kibana.yml

#Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://192.168.25.221:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true

2. Start Kibana

docker run -d \
--name=kibana \
--restart=always \
--log-driver json-file \
--log-opt max-size=100m \
--log-opt max-file=2 \
-p 5601:5601 \
-v /data/elk7/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml \
kibana:7.5.1
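
Kibana can take a little while to come up; its status API gives a quick readiness check from the shell:

curl -s http://192.168.25.221:5601/api/status | head -c 300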
Create an index pattern in Kibana (e.g. logstash-system-localhost-*):

V. Configure rsyslog

1. Edit rsyslog.conf

Uncomment the following lines to enable UDP syslog reception:
 $ModLoad imudp
 $UDPServerRun 514

Then add the forwarding and template rules below:

*.* @@172.16.2.3:514

$template IpTemplate,"/var/log/syslog/%FROMHOST-IP%/%$year%-%$month%-%$day%.log"
*.* ?IpTemplate

The last two lines write each sender's logs under /var/log/syslog/<FROMHOST-IP> (here /var/log/syslog/172.16.2.3), one file per day.

2. Restart the service

systemctl restart rsyslog
netstat -antupl | grep rsyslog
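
On the H3C firewall itself, syslog output has to be enabled and pointed at the host running rsyslog. A rough Comware-style sketch, assuming the rsyslog receiver is the ELK host at 192.168.25.221 on the default UDP port 514 (exact syntax may differ by firewall model):

system-view
info-center enable
info-center loghost 192.168.25.221
quit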

3. View the logs
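
For example, assuming the firewall at 172.16.2.3 is already sending logs, the raw files and the resulting indices can be checked like this (the file name depends on the rsyslog template above and the current date):

tail -f /var/log/syslog/172.16.2.3/$(date +%Y-%m-%d).log
curl -s "http://192.168.25.221:9200/_cat/indices?v" | grep logstash

The indexed logs can then be browsed in Kibana's Discover page using the index pattern created in section IV.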