Flume: A Distributed Log Collection Framework

Flume Overview

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data.

  • Flume website: http://flume.apache.org/
  • In short: Flume is a distributed, highly reliable, and highly available system for efficiently collecting, aggregating, and moving massive amounts of log data.

Design Goals

  • Reliability
  • Scalability
  • Manageability

Comparison with Similar Products

  • Flume: Cloudera/Apache, Java
  • Scribe: Facebook, C/C++, no longer maintained
  • Chukwa: Yahoo/Apache, Java, no longer maintained
  • Fluentd: Ruby
  • Logstash: part of the ELK stack (Elasticsearch, Logstash, Kibana)

Flume Architecture and Core Components

  • Source: collects incoming data
  • Channel: buffers events between the source and the sink
  • Sink: writes events out to the destination

Flume Environment Setup

Prerequisites

  1. Java Runtime Environment - Java 1.8 or later
  2. Memory - Sufficient memory for configurations used by sources, channels or sinks
  3. Disk Space - Sufficient disk space for configurations used by channels or sinks
  4. Directory Permissions - Read/Write permissions for directories used by the agent
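
The checklist above can be turned into a quick preflight script. A minimal sketch, where /tmp/flume-agent is a placeholder for whatever directories your agent will actually use:

```shell
# Preflight check for the prerequisites above.
# AGENT_DIR is a placeholder -- point it at the directories your
# sources, channels, and sinks will actually use.
AGENT_DIR=${AGENT_DIR:-/tmp/flume-agent}
mkdir -p "$AGENT_DIR"
if [ -r "$AGENT_DIR" ] && [ -w "$AGENT_DIR" ]; then
  echo "permissions OK: $AGENT_DIR"
else
  echo "need read/write on $AGENT_DIR" >&2
fi
# Java 1.8 or later is required; print the version if java is on PATH
command -v java >/dev/null 2>&1 && java -version 2>&1 | head -n 1 || echo "java not found on PATH"
```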

Installing the JDK

Download and extract it to ~/app.
Add Java to the system environment variables: vi ~/.bash_profile

export JAVA_HOME=/root/app/jdk1.8.0_191
export PATH=$JAVA_HOME/bin:$PATH

After saving, reload the profile: source ~/.bash_profile
Verify: java -version

Installing Flume

Download and extract it to ~/app.
Add Flume to the system environment variables:

export FLUME_HOME=/root/app/apache-flume-1.6.0-cdh5.7.0-bin
export PATH=$FLUME_HOME/bin:$PATH

After saving, reload the profile: source ~/.bash_profile

Configure flume-env.sh: export JAVA_HOME=/root/app/jdk1.8.0_191
Verify: flume-ng version

Flume in Practice

Requirement 1: Collect data from a specified network port and output it to the console

The key to using Flume is writing the configuration file:

  1. Configure the source
  2. Configure the channel
  3. Configure the sink
  4. Wire the three components together

Configuration file

# example.conf: A single-node Flume configuration

# a1: agent name
# r1: source name
# k1: sink name
# c1: channel name

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory

# a1.channels.c1.capacity = 1000
# a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the agent

flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console

Test with telnet: telnet localhost 44444

Event: { headers:{} body: 68 65 6C 6C 6F 0D     hello. }

An Event is the basic unit of data transfer in Flume.
Event = optional headers + byte array (the body)
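
As the output above shows, the logger sink prints the event body as raw hex bytes. The same view can be reproduced locally with od: the body of the hello event is just the bytes of the text plus the carriage return telnet appends (od prints lowercase hex):

```shell
# Show "hello" plus telnet's trailing carriage return (0d) as hex bytes,
# matching the event body printed by the logger sink.
printf 'hello\r' | od -An -tx1
```

This prints 68 65 6c 6c 6f 0d, the same byte sequence as the event body above.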

Requirement 2: Monitor a file and collect newly appended data to the console in real time

Agent selection: exec source + memory channel + logger sink

Configuration file

# exec-memory-logger.conf: tail a file and log new lines to the console

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /root/data/data.log
a1.sources.r1.shell = /bin/bash -c

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory

# a1.channels.c1.capacity = 1000
# a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the agent

flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-logger.conf \
-Dflume.root.logger=INFO,console
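
With the agent running, append lines to the tailed file and watch them arrive as events on the console. A sketch, using a temp path as a stand-in for the /root/data/data.log from the config so it runs anywhere:

```shell
# Append test lines to the file the exec source is tailing.
# DATA_FILE is a stand-in for /root/data/data.log from the config.
DATA_FILE=${DATA_FILE:-/tmp/data.log}
echo "hello flume" >> "$DATA_FILE"
echo "welcome"     >> "$DATA_FILE"
# These are the lines `tail -F` hands to the source as new events
tail -n 2 "$DATA_FILE"
```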

Requirement 3: Collect logs from server A to server B in real time

Technology selection:

  • Server A: exec source + memory channel + avro sink
  • Server B: avro source + memory channel + logger sink

Configuration file 1: exec-memory-avro.conf

exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel

exec-memory-avro.sources.exec-source.type = exec
exec-memory-avro.sources.exec-source.command = tail -F /root/data/data.log
exec-memory-avro.sources.exec-source.shell = /bin/bash -c

exec-memory-avro.sinks.avro-sink.type = avro
exec-memory-avro.sinks.avro-sink.hostname = bigdata-01
exec-memory-avro.sinks.avro-sink.port = 44444

exec-memory-avro.channels.memory-channel.type = memory

exec-memory-avro.sources.exec-source.channels = memory-channel
exec-memory-avro.sinks.avro-sink.channel = memory-channel

Configuration file 2: avro-memory-logger.conf

avro-memory-logger.sources = avro-source
avro-memory-logger.sinks = logger-sink
avro-memory-logger.channels = memory-channel

avro-memory-logger.sources.avro-source.type = avro
avro-memory-logger.sources.avro-source.bind = bigdata-01
avro-memory-logger.sources.avro-source.port = 44444

avro-memory-logger.sinks.logger-sink.type = logger

avro-memory-logger.channels.memory-channel.type = memory

avro-memory-logger.sources.avro-source.channels = memory-channel
avro-memory-logger.sinks.logger-sink.channel = memory-channel

Start avro-memory-logger first (the avro source must be listening before the sender connects)

flume-ng agent \
--name avro-memory-logger \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/avro-memory-logger.conf \
-Dflume.root.logger=INFO,console

Then start exec-memory-avro

flume-ng agent \
--name exec-memory-avro \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-avro.conf \
-Dflume.root.logger=INFO,console
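
Because the avro sink connects out to the avro source, a quick way to confirm the start order worked is to check whether the source's port (44444 in avro-memory-logger.conf) is listening before launching the second agent. A sketch; netstat flags vary by platform:

```shell
# Check whether the avro source is already listening on its port.
if netstat -tln 2>/dev/null | grep -q ':44444 '; then
  echo "avro source is listening on 44444"
else
  echo "port 44444 not listening yet"
fi
```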

Log collection process:

  1. Server A monitors a file; when users visit the main site, their behavior is logged to access.log.
  2. The avro sink sends newly produced log lines to the hostname and port specified by the corresponding avro source.
  3. The agent with the avro source writes the logs to the console (in production, typically to Kafka instead).
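
Step 3 notes that in production the final destination is usually Kafka rather than the console. With Flume that is a configuration-only change: swap the logger sink in avro-memory-logger.conf for a Kafka sink. A sketch for the Flume 1.6 Kafka sink; the broker address and topic name below are placeholders:

```properties
# Replace the logger sink with a Kafka sink (broker and topic are placeholders)
avro-memory-logger.sinks = kafka-sink
avro-memory-logger.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
avro-memory-logger.sinks.kafka-sink.brokerList = bigdata-01:9092
avro-memory-logger.sinks.kafka-sink.topic = flume-logs
avro-memory-logger.sinks.kafka-sink.batchSize = 100
avro-memory-logger.sinks.kafka-sink.channel = memory-channel
```

The avro source and memory channel stay exactly as configured above; only the sink definition changes.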