ELK Deployment Notes

Kafka

First install a JDK, then install Kafka and create the topic logstash.

Official download: http://kafka.apache.org/downloads

# bin/kafka-topics.sh --create --zookeeper 192.168.165.243:2181 --replication-factor 1 --partitions 1 --topic logstash
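After creating the topic, it helps to confirm it actually exists. A minimal sketch, assuming the same ZooKeeper address as in the create command and that it is run from the Kafka install directory:

```shell
# ZooKeeper address from the create command above; adjust for your environment.
ZK="192.168.165.243:2181"
# List topics and look for an exact "logstash" match; the || guard keeps the
# snippet from failing outright when ZooKeeper is unreachable.
bin/kafka-topics.sh --list --zookeeper "$ZK" 2>/dev/null | grep -x logstash \
  || echo "topic not visible at $ZK"
```

Note that `--zookeeper` matches the Kafka versions of this era; newer Kafka releases use `--bootstrap-server` instead.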

Elasticsearch

Official download: https://www.elastic.co/downloads/elasticsearch

# tar -zxf elasticsearch-7.1.0-linux-x86_64.tar.gz -C /data/server/
# mv /data/server/elasticsearch-7.1.0 /data/server/elasticsearch
# vim config/elasticsearch.yml
network.host: 192.168.165.239  # bind address and port; without this the node cannot be reached from a browser
http.port: 9200

#cluster.name: es_cluster
node.name: node-1  # this machine's name within the ES cluster (the cluster name itself is set via cluster.name)
node.attr.rack: r1

path.data: /data/server/elasticsearch/data  # where ES stores its data and logs
path.logs: /data/logs/elasticsearch

#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["node-1"]

Note: Elasticsearch refuses to run as the root superuser, so create a dedicated es account.

# create the es user
adduser es
# set its password
passwd es

# grant the es user ownership of the elasticsearch directory
# chown -R es /data/server/elasticsearch

Start in the foreground:

# ./bin/elasticsearch

Start in the background:

# ./bin/elasticsearch -d

Open http://192.168.16.20:9200/ in a browser (use the address you set as network.host); you should get:

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.1.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "606a173",
    "build_date" : "2019-05-16T00:43:15.323135Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
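Beyond the banner JSON above, cluster health is a quick sanity check. A sketch assuming the same host and port as in this walkthrough; adjust to your network.host:

```shell
# Address used in this walkthrough; adjust for your environment.
ES="http://192.168.16.20:9200"
# "status" should be green or yellow for a single-node setup;
# -s silences progress output, -m 5 caps the wait if the node is down.
curl -s -m 5 "$ES/_cluster/health?pretty" || echo "Elasticsearch not reachable at $ES"
```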

If startup fails with errors like these:

bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [3] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[3]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

(1) max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

The per-process limit on open files is too small. Check the current hard and soft limits with:

ulimit -Hn
ulimit -Sn

Edit /etc/security/limits.conf and add the lines below; the change takes effect after the user logs out and back in.

*               soft    nofile          65536
*               hard    nofile          65536

(2)max number of threads [3818] for user [es] is too low, increase to at least [4096]

Same class of problem: the maximum thread count for the user is too low. Add to /etc/security/limits.conf:

*               soft    nproc           4096
*               hard    nproc           4096

Check the current values with:

ulimit -Hu
ulimit -Su
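Once both limits.conf edits are in place and you have logged back in as es, the effective values can be checked in one go. A small sketch:

```shell
# After logging back in, confirm the new limits are active.
nofile=$(ulimit -n)   # soft open-file limit
nproc=$(ulimit -u)    # soft max user processes/threads
echo "nofile=$nofile nproc=$nproc"
# The bootstrap checks require nofile >= 65535 and nproc >= 4096.
```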

(3)max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Edit /etc/sysctl.conf and append vm.max_map_count=262144 at the end:

vi /etc/sysctl.conf
sysctl -p

Running sysctl -p applies the change without rebooting. The third bootstrap check (discovery settings) is already satisfied by the cluster.initial_master_nodes line set in elasticsearch.yml earlier.

Logstash

Official download: https://www.elastic.co/downloads/logstash

# tar -zxf logstash-7.1.0.tar.gz -C /data/server/
# mv /data/server/logstash-7.1.0 /data/server/logstash
# cd /data/server/logstash/
# mkdir config_file
# vim config_file/log.conf

Start in the foreground:

# bin/logstash -f config_file/log.conf

Start in the background:

# nohup bin/logstash -f config_file/log.conf >/dev/null 2>&1 &
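Before starting Logstash for real, the pipeline file can be syntax-checked; `--config.test_and_exit` validates the configuration without running it. A sketch assuming the directory layout above:

```shell
# Pipeline file created above.
CONF="config_file/log.conf"
# Validate the pipeline without starting it; the guard lets the snippet
# degrade gracefully on hosts where Logstash is not installed here.
if [ -x bin/logstash ]; then
    bin/logstash -f "$CONF" --config.test_and_exit
else
    echo "run from /data/server/logstash (bin/logstash not found here)"
fi
```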

log.conf for collecting log files and shipping them to Kafka:

input {
    file {
        path => ["/home/dubbo/applogs/*.log"]
        type => "appblog"
        start_position => "beginning"
        #sincedb_path => "/dev/null"
        #ignore_older => 0
        codec => multiline {
            pattern => "^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}"
            negate => true
            what => "previous"
        }
    }
}

output {
    kafka {
        topic_id => "logstash"
        bootstrap_servers => "192.168.16.20:9092"  # Kafka broker address
        batch_size => 5
        codec => json
    }
}
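With the shipper running, messages should start arriving in the logstash topic. One way to confirm, assuming the console consumer that ships with Kafka and the broker address used in the output block above:

```shell
# Broker address from the output block above; adjust for your environment.
BROKER="192.168.16.20:9092"
# Read a few messages from the beginning of the topic, then exit.
bin/kafka-console-consumer.sh --bootstrap-server "$BROKER" \
    --topic logstash --from-beginning --max-messages 5 2>/dev/null \
  || echo "no broker reachable at $BROKER"
```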

log.conf for consuming from Kafka and writing to Elasticsearch:

input {
    kafka {
        bootstrap_servers => "192.168.16.20:9092"
        topics => ["logstash"]
        group_id => "logstash"
        consumer_threads => 5
        decorate_events => true
        codec => json
        type => "appblog"
        #auto_offset_reset => "smallest"
        #reset_beginning => true
    }
}

filter {
    if [type] == "appblog" {
        if [message] =~ "^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}\s+\[[a-zA-Z0-9._-]+\]\s+\[[a-zA-Z0-9._-]+\][\s\S]*$" {
            grok {
                patterns_dir => "./patterns"
                add_field => {"logmatch" => "99999"}
                match => { "message" => "%{TIME_STAMP_A:logtime}\s+\[%{APP_NAME:appname}\]\s+\[%{LOG_LVL:loglvl}\]\s+\[%{TRACE_ID:traceid}\]\s+\[%{SPAN_ID:spanid}\]" }
            }
            date {
                match => ["logtime", "yyyy-MM-dd HH:mm:ss.SSS"]  
                target => "messagetime"
                #locale => "en"
                #timezone => "+00:00"
                #remove_field => ["logtime"]
            }
        }
    }
}
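The grok filter above relies on custom patterns under ./patterns that the original post does not show. A plausible patterns file, consistent with the regex in the `if` condition and the field names in the match line (the definitions themselves are assumptions):

```
TIME_STAMP_A \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{3}
APP_NAME [a-zA-Z0-9._-]+
LOG_LVL [A-Z]+
TRACE_ID [a-zA-Z0-9._-]*
SPAN_ID [a-zA-Z0-9._-]*
```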

output {
    elasticsearch {
        hosts => ["192.168.16.20:9200"]
        #hosts => ["192.168.16.20:9200","192.168.16.22:9200"]
        index => "%{type}"
    }
}
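Once the consumer pipeline is running, the appblog index should appear in Elasticsearch (it is created on the first document write). A quick check, assuming the same host as above:

```shell
# Elasticsearch address from this walkthrough; adjust for your environment.
ES="http://192.168.16.20:9200"
# List indices and look for the one named after the type field.
curl -s -m 5 "$ES/_cat/indices?v" | grep appblog \
  || echo "index not visible yet at $ES"
```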

Kibana

Official download: https://www.elastic.co/downloads/kibana

# tar -zxf kibana-7.1.0-linux-x86_64.tar.gz -C /data/server/
# mv /data/server/kibana-7.1.0-linux-x86_64 /data/server/kibana
# cd /data/server/kibana/
# vim config/kibana.yml
server.port: 5601
server.host: "192.168.16.25"
elasticsearch.hosts: ["http://192.168.16.20:9200"]
xpack.reporting.encryptionKey: "yezhou"
xpack.security.encryptionKey: "78C87E5FC3656BE577BB41A80F45F537"

Start in the foreground:

# ./bin/kibana

Start in the background:

# nohup ./bin/kibana >/dev/null 2>&1 &

Open 192.168.16.25:5601 in a browser (the address configured as server.host above) to reach the search UI.
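Kibana also exposes a status endpoint that can be probed from the shell. A sketch using the server.host and server.port values from kibana.yml above:

```shell
# server.host:server.port from kibana.yml above; adjust for your environment.
KIBANA="http://192.168.16.25:5601"
# /api/status reports overall state; the guard handles unreachable hosts.
curl -s -m 5 "$KIBANA/api/status" >/dev/null && echo "Kibana is up" \
  || echo "Kibana not reachable at $KIBANA"
```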

Check the Kibana process (Kibana runs on Node.js, hence the grep for node):

# ps -ef | grep node

Copyright notice:
Author: Joe.Ye
Link: https://www.appblog.cn/index.php/2023/03/19/elk-deployment-records/
Source: APP全栈技术分享
The article is copyrighted by the author; please do not reproduce it without permission.
