The Azure Event Hub source plugin (name: kafka, alias: Azure_Event_Hub) lets you ingest data from your Azure Event Hubs instances into a telemetry pipeline.
This is a pull-based source plugin.
## Supported telemetry types
The Azure Event Hub source plugin for Chronosphere Telemetry Pipeline supports these telemetry types:

| Logs | Metrics | Traces |
|---|---|---|
## Configuration parameters
Use the parameters in this section to configure the Azure Event Hub source plugin. The Telemetry Pipeline web interface uses the items in the Name column to describe these parameters. Pipeline configuration files use the items in the Key column as YAML keys.

### General
| Name | Key | Description | Default |
|---|---|---|---|
| Event Hub Namespace | brokers | Required. Your Event Hub namespace. | [REPLACE WITH YOUR NAMESPACE].servicebus.windows.net:9093 |
| Event Hub Topic | topics | Required. The Event Hub topic to read information from. | none |
| Connection String Key | rdkafka.sasl.password | Required. The Event Hub connection string from within the connection access policy set for the source. | Endpoint=[REPLACE WITH YOUR CONNECTION STRING VALUE] |
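The required parameters above can be combined in a pipeline configuration file. The following is a minimal sketch, assuming a Fluent Bit-style YAML layout with a `pipeline.inputs` section (the top-level structure of your configuration file may differ); the topic name is a placeholder:

```yaml
pipeline:
  inputs:
    # Azure Event Hub source plugin (name: kafka)
    - name: kafka
      # Your Event Hub namespace, using the Kafka endpoint on port 9093
      brokers: "[REPLACE WITH YOUR NAMESPACE].servicebus.windows.net:9093"
      # The Event Hub topic to read information from
      topics: my-event-hub-topic
      # Connection string from the access policy set for the source
      rdkafka.sasl.password: "Endpoint=[REPLACE WITH YOUR CONNECTION STRING VALUE]"
```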
### Advanced
| Name | Key | Description | Default |
|---|---|---|---|
| Minimum Queued Messages | rdkafka.queued.min.messages | Minimum number of messages per topic and partition that Telemetry Pipeline tries to maintain in the local consumer queue. | 10 |
| Request Timeout (ms) | rdkafka.request.timeout.ms | How long Telemetry Pipeline waits before terminating a request connection. Recommended value: 60000. | 60000 |
| Session Timeout (ms) | rdkafka.session.timeout.ms | How long Telemetry Pipeline waits before terminating a session connection. Recommended value: 30000. | 30000 |
| SASL Username | rdkafka.sasl.username | SASL username. | $ConnectionString |
| Security Protocol | rdkafka.security.protocol | The security protocol for Azure Event Hub. If you require OAuth or OpenID, contact Chronosphere Support. | SASL_SSL |
| SASL Mechanism | rdkafka.sasl.mechanism | The transport mechanism for the SASL connection. | PLAIN |
| Memory Buffer Limit | mem_buf_limit | For pipelines with the Deployment or DaemonSet workload type only. Sets a limit for how much buffered data the plugin can write to memory, which affects backpressure. This value must follow Fluent Bit’s rules for unit sizes. If unspecified, no limit is enforced. | none |
### Other
This parameter doesn’t have an equivalent setting in the Telemetry Pipeline web interface, but you can use it in pipeline configuration files.

| Name | Key | Description | Default |
|---|---|---|---|
| none | buffer_max_size | Sets the maximum chunk size for buffered data. If a single log exceeds this size, the plugin drops that log. | 4M |
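Because buffer_max_size has no web-interface equivalent, you set it directly in the configuration file. A minimal sketch, assuming a Fluent Bit-style YAML layout; the 8M value is an arbitrary illustration, not a recommendation:

```yaml
pipeline:
  inputs:
    - name: kafka
      # Raise the maximum buffered chunk size from the 4M default.
      # A single log larger than this limit is dropped.
      buffer_max_size: 8M
```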
## Extended librdkafka parameters
This plugin uses the librdkafka library. Certain configuration parameters available through the Telemetry Pipeline UI are based on librdkafka settings. These parameters generally use the rdkafka. prefix.
In addition to the parameters available through the Telemetry Pipeline UI, you can customize any of the librdkafka configuration properties by adding them to a pipeline configuration file. To do so, prepend the rdkafka. prefix to the name of that property.

For example, to customize the socket.keepalive.enable property, add the rdkafka.socket.keepalive.enable key to your configuration file.
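That example can be sketched in YAML as follows, assuming a Fluent Bit-style pipeline-configuration layout (your file's top-level structure may differ):

```yaml
pipeline:
  inputs:
    - name: kafka
      # Pass-through librdkafka property: the rdkafka. prefix maps this
      # key to librdkafka's socket.keepalive.enable setting, which turns
      # on TCP keepalives for broker connections.
      rdkafka.socket.keepalive.enable: true
```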
Don’t use librdkafka properties to configure a pipeline’s memory buffer. Instead,
use the
buffer_max_size parameter.