# Confluent Cloud destination plugin

The Confluent Cloud destination plugin lets you send Chronosphere Telemetry Pipeline data to Confluent Cloud.

## Configuration parameters

The Confluent Cloud destination plugin provides these configuration parameters.

### General

| Name | Key | Description | Default |
| --- | --- | --- | --- |
| Confluent Cloud Bootstrap Servers | `brokers` | Required. The Confluent Cloud bootstrap servers, which you can find in the cluster configuration settings. | `SERVERNAME.confluent.cloud:9092` |
| Confluent Cloud Topic | `topics` | Required. The Confluent Cloud topic to send data to. | none |
| Confluent Cloud API Key | `rdkafka.sasl.username` | Required. The Confluent Cloud API key. | none |
| Confluent Cloud API Secret | `rdkafka.sasl.password` | Required. The Confluent Cloud API secret. | none |
| Format | `format` | The data format (`json` or `msgpack`). | `json` |
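Chronosphere Telemetry Pipeline destinations are built on Fluent Bit, so the required parameters above map onto a Kafka-style output. The following is a minimal sketch in Fluent Bit's classic configuration syntax; the server name, topic, and credential values are placeholders to replace with your own:

```ini
[OUTPUT]
    # Kafka-based output targeting Confluent Cloud
    Name                       kafka
    Match                      *
    # Required: bootstrap servers from the cluster configuration settings
    Brokers                    SERVERNAME.confluent.cloud:9092
    # Required: the Confluent Cloud topic to send data to
    Topics                     my-topic
    # Required: Confluent Cloud API key and secret (placeholders)
    rdkafka.sasl.username      MY_API_KEY
    rdkafka.sasl.password      MY_API_SECRET
    # These match the plugin defaults shown in the tables
    rdkafka.security.protocol  SASL_SSL
    rdkafka.sasl.mechanisms    PLAIN
    Format                     json
```

Confluent Cloud requires the `SASL_SSL` security protocol and `PLAIN` SASL mechanism shown here, which is why they are the plugin defaults.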

### Advanced

| Name | Key | Description | Default |
| --- | --- | --- | --- |
| Message Key | `message_key` | Optional key to store the message. | none |
| Message Key Field | `message_key_field` | If set, the value of this field in the record is used as the message key. If the field isn't set, or isn't found in the record, `message_key` is used instead, if set. | none |
| Timestamp Key | `timestamp_key` | The key to store the record timestamp in. | none |
| Timestamp Format | `timestamp_format` | The timestamp format: `iso8601` or `double`. | `double` |
| Body Key | `body_key` | The key that contains the body. | none |
| Queue Full Retries | `queue_full_retries` | Fluent Bit queues data in the rdkafka library. If the underlying library can't flush the records, the queue can fill and block new records from being added. This option sets the number of local retries to enqueue the data, with an interval of one second between retries. Set `queue_full_retries` to `0` for an unlimited number of retries. | `10` |
| Security Protocol | `rdkafka.security.protocol` | The security protocol used to communicate with Confluent Cloud. | `SASL_SSL` |
| SASL Mechanism | `rdkafka.sasl.mechanisms` | The SASL authentication mechanism for the API. | `PLAIN` |
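The advanced parameters control how each record becomes a Kafka message: which field supplies the message key, how the timestamp is encoded, and how long the plugin retries when the local rdkafka queue is full. A sketch in Fluent Bit's classic configuration syntax; the `id` field, topic, and credential values are illustrative placeholders:

```ini
[OUTPUT]
    Name                       kafka
    Match                      *
    Brokers                    SERVERNAME.confluent.cloud:9092
    Topics                     my-topic
    rdkafka.sasl.username      MY_API_KEY
    rdkafka.sasl.password      MY_API_SECRET
    # Use the value of the record's "id" field as the Kafka message key;
    # fall back to "default-key" when the field is missing from a record
    Message_Key_Field          id
    Message_Key                default-key
    # Store the record timestamp under the "ts" key in ISO 8601 format
    Timestamp_Key              ts
    Timestamp_Format           iso8601
    # Retry enqueueing into rdkafka up to 30 times, one second apart;
    # 0 would retry indefinitely
    Queue_Full_Retries         30
```

Setting both `message_key_field` and `message_key`, as above, gives per-record keys with a stable fallback, so records without the field still land on a deterministic partition rather than being distributed randomly.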