# fluent-plugin-kafka, a plugin for Fluentd

A fluentd plugin to both consume and produce data for Apache Kafka.

TODO: Also, I need to write tests.

## Installation

Add this line to your application's Gemfile:

```
gem 'fluent-plugin-kafka'
```

Or install it yourself as:

```
$ gem install fluent-plugin-kafka
```

Note that:

- Output plugins work with kafka v0.8 or later.
- Input plugins work with kafka v0.9 or later.

If you want to use zookeeper related parameters, you also need to install the zookeeper gem. The zookeeper gem includes a native extension, so development tools (a C compiler, make, and so on) are needed.

## Usage

### Common parameters

### SSL authentication

See Encryption and Authentication using SSL for more detail.

### SASL authentication

Set principal and path to keytab for SASL/GSSAPI authentication. See Authentication using SASL for more details. A hedged configuration sketch covering both SSL and SASL appears at the end of this document.

### Input plugin (@type 'kafka')

Consume events by single consumer.

```
<source>
  @type kafka

  # Optionally, you can manage topic offset by using zookeeper
  offset_zookeeper    <zookeeper_host>:<zookeeper_port>
  offset_zk_root_node <offset path in zookeeper> :default => '/fluent-plugin-kafka'

  max_bytes     (integer) :default => nil (Use default of ruby-kafka)
  max_wait_time (integer) :default => nil (Use default of ruby-kafka)
  min_bytes     (integer) :default => nil (Use default of ruby-kafka)
</source>
```

Supports a start of processing from the assigned offset for specific topics.

See also the ruby-kafka README for more detailed documentation about ruby-kafka.

### Input plugin (@type 'kafka_group', supports kafka group)

Consume events by kafka consumer group features.

```
<source>
  @type kafka_group

  offset_commit_interval  (integer) :default => nil (Use default of ruby-kafka)
  offset_commit_threshold (integer) :default => nil (Use default of ruby-kafka)
  start_from_beginning    (bool)    :default => true
</source>
```

### Output plugin

This plugin uses the ruby-kafka producer for writing data and works with recent kafka versions.

```
<match app.**>
  @type kafka_buffered

  # Brokers: you can choose either brokers or zookeeper.
  # If you are not familiar with zookeeper, use the brokers parameter.
  brokers        <broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>,..
  zookeeper      <zookeeper_host>:<zookeeper_port>
  zookeeper_path <broker path in zookeeper> :default => /brokers/ids # Set path in zookeeper for kafka

  default_topic         (string) :default => nil
  default_partition_key (string) :default => nil
  default_message_key   (string) :default => nil
  output_data_type      (json|ltsv|msgpack|attr:<record name>|<formatter name>) :default => json
  output_include_tag    (bool) :default => false
  output_include_time   (bool) :default => false
  exclude_topic_key     (bool) :default => false
  exclude_partition_key (bool) :default => false
  get_kafka_client_log  (bool) :default => false

  ack_timeout                   (integer)     :default => nil (Use default of ruby-kafka)
  compression_codec             (gzip|snappy) :default => nil (No compression)
  kafka_agg_max_bytes           (integer)     :default => 4096
  kafka_agg_max_messages        (integer)     :default => nil (No limit)
  max_send_limit_bytes          (integer)     :default => nil (No drop)
  discard_kafka_delivery_failed (bool)        :default => false (No discard)

  # See fluentd document for buffer related parameters.
</match>
```

The `ltsv`, `msgpack`, `attr:<record name>` and `<formatter name>` values of `output_data_type` use fluentd's formatter plugins.

The output plugin supports the following ruby-kafka producer options:

- max_send_retries - default: 1 - Number of times to retry sending messages to a leader.
- required_acks - default: -1 - The number of acks required per request. If you need flush performance, set a lower value.
- ack_timeout - default: nil - How long the producer waits for acks.
- compression_codec - default: nil - The codec the producer uses to compress messages.
- kafka_agg_max_bytes - default: 4096 - Maximum total message size to include in one batch transmission.
- kafka_agg_max_messages - default: nil - Maximum number of messages to include in one batch transmission.
- max_send_limit_bytes - default: nil - Maximum byte size of a single message to send, to avoid MessageSizeTooLarge errors.

ruby-kafka sometimes returns a Kafka::DeliveryFailed error without good information. In this case, `get_kafka_client_log` is useful for identifying the error cause: ruby-kafka's log is routed to the fluentd log, so you can see ruby-kafka's messages in fluentd's logs.

See also the ruby-kafka README for more detailed documentation about ruby-kafka options.
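Putting the sections above together, here is a minimal end-to-end sketch that consumes one topic and produces to another. The broker address, topic names, tag prefix, and the `topics`, `format`, and `add_prefix` parameter names are illustrative assumptions rather than values taken from this README, so verify them against the plugin documentation for your version.

```
# Minimal sketch: read from "app-log", write to "app-log-out".
# Assumes a single local broker on localhost:9092.
<source>
  @type kafka
  brokers    localhost:9092   # assumed broker address
  topics     app-log          # assumed input topic
  format     json             # parse each message body as JSON
  add_prefix kafka            # events are tagged kafka.app-log
</source>

<match kafka.**>
  @type kafka_buffered
  brokers          localhost:9092   # assumed broker address
  default_topic    app-log-out      # assumed output topic
  output_data_type json
  flush_interval   10s              # standard fluentd buffer option
</match>
```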
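The SSL and SASL sections above defer to ruby-kafka's documentation for details. As a hedged sketch only: the option names `ssl_ca_cert`, `ssl_client_cert`, `ssl_client_cert_key`, `principal`, and `keytab` below are assumptions not spelled out in this README, so check them against the plugin and ruby-kafka docs before use.

```
<match app.**>
  @type kafka_buffered
  brokers <broker1_host>:9093                # assumed TLS listener port

  # SSL: see "Encryption and Authentication using SSL" (ruby-kafka)
  ssl_ca_cert         /path/to/ca.crt        # CA that signed the broker certificates
  ssl_client_cert     /path/to/client.crt    # client certificate for mutual TLS
  ssl_client_cert_key /path/to/client.key    # private key for the client certificate

  # SASL/GSSAPI: see "Authentication using SASL" (ruby-kafka)
  principal fluentd/host@EXAMPLE.COM                 # assumed Kerberos principal
  keytab    /etc/security/keytabs/fluentd.keytab     # assumed path to keytab
</match>
```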
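The producer options listed above trade durability against flush performance. Below is a throughput-leaning sketch using only parameters named in this README; the values are illustrative assumptions, not recommendations from the plugin authors.

```
<match app.**>
  @type kafka_buffered
  brokers <broker1_host>:<broker1_port>,<broker2_host>:<broker2_port>

  required_acks       1       # leader-only ack: faster flush than -1 (all replicas)
  ack_timeout         5       # stop waiting for acks after 5 (assumed unit: seconds)
  compression_codec   gzip    # trade CPU for smaller payloads (snappy also supported)
  kafka_agg_max_bytes 65536   # pack more data per batch than the 4096-byte default
  max_send_retries    3       # retry sends to a leader a few times before failing
</match>
```

Leader-only acknowledgement flushes faster than waiting for all in-sync replicas, at the cost of possible message loss if the leader fails before replication completes.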