I have never used Logstash. I've been reading about it for a long time and have already reformatted all my logs into pure JSON, but following the so-called tutorials I keep finding online, I still can't write a working config.
input {
  # File 1: ordinary log4j-style log; one event may span several lines (e.g. stack traces)
  file {
    path => ["/your/log/path/1.log"]
    start_position => "beginning"   # read from the start on the first run; afterwards sincedb remembers the offset
    type => "type1"                 # tag used below to route these events to Elasticsearch
    codec => multiline {
      pattern => "^\d"              # a new event starts with a digit (the timestamp)
      negate => true                # lines that do NOT match the pattern ...
      what => "previous"            # ... are appended to the previous event
    }
  }
  # File 2: tracking log, one JSON object per event
  file {
    path => ["/your/log/path/2.log"]
    start_position => "beginning"
    type => "type2"                 # routed to Kafka below
    codec => multiline {
      pattern => "^{"               # a new event starts with an opening brace
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "type2" {
    # drop the metadata Logstash adds, so only the original JSON line is left in [message]
    mutate {
      remove_field => ["@version", "host", "path", "@timestamp", "type", "tags"]
    }
  }
}

output {
  if [type] == "type1" {
    # log4j file -> Elasticsearch, one index per day
    elasticsearch {
      index => "log-index1-%{+YYYY-MM-dd}"
      hosts => ["192.168.41.31:9200"]
      user => "logger"
      password => "password"
    }
  } else {
    # everything else (the JSON tracking file; its [type] field was removed above) -> Kafka
    kafka {
      bootstrap_servers => "192.168.41.33:9092,192.168.41.20:9092,192.168.41.32:9092"
      topic_id => "healthTrace"
      client_id => "logstash"
      batch_size => 200             # producer batch size in bytes
      codec => plain {
        format => "%{message}"      # forward only the original JSON line, without Logstash's wrapping
      }
    }
  }
}
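By the way, you can check whether a config like this even parses by running it with `-f <your config file> --config.test_and_exit`, and while tuning it, it often helps to temporarily replace both outputs with a console output so you can see exactly what the multiline codecs produce. A minimal debug output section for that purpose:

output {
  # temporary debug output: print every event with all of its fields to the console
  stdout { codec => rubydebug }
}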
I also posted on the CSDN forum with four sample log entries. Given logs like those, how should the config be written?
Post link: https://bbs.csdn.net/topics/392442985
There are two source files. The first is an ordinary log file in the most common log4j format; Logstash needs to read it and push it into Elasticsearch. The second is a tracking (instrumentation) log file whose records are JSON; this one has to go through Logstash into Kafka, where the tracking component consumes it.
I've only ever heard that a single file is enough. Why are two log files needed here?
Here are two real examples; every parameter is commented in detail. I hope they help.
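If the log4j file should also be split into fields (timestamp, level, thread, logger, message) before it reaches Elasticsearch, a grok + date filter for the type1 events can be added to the filter section. The pattern below is only a sketch that assumes a typical `%d{ISO8601} %p [%t] %c - %m` conversion pattern; it has to be adjusted to the four samples in the linked post:

filter {
  if [type] == "type1" {
    grok {
      # assumed layout: "2018-06-01 12:00:00,123 INFO [main] com.example.Service - some message"
      match => { "message" => "%{TIMESTAMP_ISO8601:log_time}\s+%{LOGLEVEL:level}\s+\[%{DATA:thread}\]\s+%{JAVACLASS:logger}\s+-\s+%{GREEDYDATA:log_message}" }
    }
    date {
      # use the timestamp from the log line as the event's @timestamp
      match => ["log_time", "yyyy-MM-dd HH:mm:ss,SSS"]
    }
  }
}

The second file needs no parsing at all: since the Kafka output uses a plain codec with `format => "%{message}"`, each original JSON line is forwarded to the topic unchanged for the tracking component to consume.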