A question about Flume + Kafka throughput

I recently ran a Flume + Kafka test, reading files from a directory. A 31 MB file took a very long time to ingest, roughly 20 minutes, and I don't know why. Could anyone who knows advise me?
Below is the Flume configuration:

#agent section
producer.sources = s
producer.channels = c
producer.sinks = r

#source section
#producer.sources.s.type = seq
#producer.sources.s.channels = c
producer.sources.s.type = spooldir

producer.sources.s.spoolDir = /home/lb/data
producer.sources.s.channels = c

#Each sink's type must be defined

producer.sinks.r.type = org.apache.flume.plugins.KafkaSink
producer.sinks.r.metadata.broker.list=127.0.0.1:9092
producer.sinks.r.partition.key=1
producer.sinks.r.partitioner.class=org.apache.flume.plugins.SinglePartition
producer.sinks.r.serializer.class=kafka.serializer.StringEncoder
producer.sinks.r.request.required.acks=0
producer.sinks.r.max.message.size=1000000
producer.sinks.r.producer.type=async
producer.sinks.r.custom.encoding=UTF-8
producer.sinks.r.custom.topic.name=lbTestToptic

#Specify the channel the sink should use
producer.sinks.r.channel = c

#Each channel's type is defined.

producer.channels.c.type = memory
producer.channels.c.capacity = 100000
producer.channels.c.transactionCapacity = 10000
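
For what it's worth, slow spooldir-to-Kafka ingestion is often a batching problem rather than a data-volume problem: with small batches, every event pays per-transaction and per-send overhead. A hedged sketch of settings to try first (the values are illustrative, and the sink-side names assume the old Kafka 0.8 producer properties, which this KafkaSink plugin appears to pass through):

```properties
# Source: commit more events per channel transaction (Flume default is 100)
producer.sources.s.batchSize = 1000
# Async producer batching (old Kafka 0.8 producer properties; assumed to be
# forwarded to the producer by this KafkaSink plugin)
producer.sinks.r.batch.num.messages = 1000
producer.sinks.r.queue.buffering.max.ms = 500
```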

Hoping someone who knows the cause can help. Thanks.

Don't look only at the file size (31 MB); look at the number of lines as well.
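
The point above can be made concrete: the spooling directory source emits one Flume event per line, so a 31 MB file of short lines (say ~100 bytes each) is on the order of 300,000 events, not one transfer. A quick way to check (the sample file here is a stand-in, not the poster's actual data):

```shell
# Spooldir emits one event per line, so line count, not byte size,
# determines how many events Flume must push through the channel.
printf 'line1\nline2\nline3\n' > /tmp/sample.txt  # stand-in for a spooled file
wc -l < /tmp/sample.txt
```

Run `wc -l` on the real file in /home/lb/data to see how many events Flume actually had to move.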