[error]: #0 unexpected error on reading data host="192.168.16.9" port=64283 error_class=NoMethodError error="undefined method `empty?' for nil:NilClass"
# built-in forward (TCP) input
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Kafka buffered output
<match **>
  @type kafka_buffered
  brokers 192.168.16.8:9092

  buffer_type file
  buffer_path /var/log/fluent/buffer-01/*.buffer
  flush_interval 10s
  buffer_chunk_limit 8m
  buffer_queue_limit 3600
  retry_wait 1s
  max_retry_wait 60s
  disable_retry_limit true

  # ruby-kafka producer options
  max_send_retries 1
  required_acks 1
  compression_codec gzip
  output_data_type json
  output_include_time true
  num_threads 3
</match>

<system>
  # log_level: trace, debug, info, warn, error
  log_level trace
</system>
This error leads to a second error, as follows: emit transaction failed: error_class=NoMethodError error="undefined method `empty?' for nil:NilClass" tag="json.test"
The error is raised at /fluentd-1.1.3/lib/fluent/plugin/buffer.rb:544:in `write_once'

Fluentd's debug log also contains this error: [warn]: #0 emit transaction failed: error_class=NoMethodError error="undefined method `empty?' for nil:NilClass" tag="fluent.trace"
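For anyone unfamiliar with this error class: the stack trace means some code inside buffer.rb called `empty?` on a value that turned out to be nil. The sketch below uses illustrative names (not fluentd's actual internals) to show how that exact `NoMethodError` arises, and a defensive variant that guards against nil first:

```ruby
# Illustrative sketch only -- `flush_chunk` is NOT fluentd's real method.
# Calling `empty?` on nil raises NoMethodError, which is what buffer.rb
# hits when chunk data unexpectedly comes back as nil.
def flush_chunk(records)
  return :skipped if records.empty?  # raises NoMethodError when records is nil
  :flushed
end

# Defensive variant: check for nil before calling empty?.
def flush_chunk_safe(records)
  return :skipped if records.nil? || records.empty?
  :flushed
end

begin
  flush_chunk(nil)
rescue NoMethodError => e
  puts "reproduced: #{e.class}"  # same error class as in the fluentd log
end
```

The fix direction is the same in real code: either the caller must never hand nil to this path, or the callee must treat nil like an empty chunk.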
I have the same issue.
@felixzh2015 Could you share steps to reproduce? I tested your configuration with fluentd v1.2.0 and fluent-plugin-kafka v0.7.2, but the error above did not occur.
It doesn't always happen; you need to run it for a long time. I use fluent-logger-java. @repeatedly
I have the same issue in v1.2.1:
emit transaction failed: error_class=NoMethodError error="undefined method `empty?' for nil:NilClass" location="/var/lib/gems/2.3.0/gems/fluentd-1.2.1/lib/fluent/plugin/buffer.rb:544:in `write_once'
I'm trying to reproduce this issue with current master (e2259a565d0c63760c1e8cfac357042448829de2), but I couldn't reproduce it.
It doesn't always happen; you need to run it for a long time and send a large volume of data.
We know that this error occurs when a chunk file is corrupted due to an abnormal outage or similar event. We are working on improving the handling of chunk file corruption in this issue:
- #3970
So I am closing this issue now, but if you are still seeing these errors even though no abnormal outage has occurred, please let us know.
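The corruption scenario described above can be sketched as a pre-flight check: before handing a buffer chunk file to the flush path, verify it is actually readable and non-empty, so a nil/empty payload never reaches code that calls `empty?` on it. This is a hypothetical helper for illustration, not fluentd's actual code:

```ruby
require 'tempfile'

# Hypothetical helper (not part of fluentd): treat a buffer chunk file
# as unreadable when it is missing or empty -- an empty file is a typical
# leftover of an abnormal outage mid-write.
def readable_chunk?(path)
  File.exist?(path) && !File.binread(path).empty?
end

# Example: a zero-byte file is skipped; a file with data passes.
Tempfile.create('chunk') do |f|
  puts readable_chunk?(f.path)  # empty file -> false
  f.write('packed records')
  f.flush
  puts readable_chunk?(f.path)  # non-empty file -> true
end
```

A real implementation would also validate that the contents deserialize cleanly (fluentd file-buffer chunks are msgpack-framed), but even this minimal existence/size check turns a crash into a skippable, loggable condition.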