The Kafka exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages; therefore, it should be used with the batch and queued retry processors for higher throughput and resiliency. The message payload encoding is configurable.
## Configuration settings

> [!NOTE]
> You can opt in to using the franz-go client by enabling the feature gate `exporter.kafkaexporter.UseFranzGo` when you run the OpenTelemetry Collector. See the following page for more details: Feature Gates.
There are no required settings.
The following settings can be optionally configured (a combined example follows the list):
- `brokers` (default = localhost:9092): The list of Kafka brokers.
- `protocol_version` (default = 2.1.0): Kafka protocol version.
- `resolve_canonical_bootstrap_servers_only` (default = false): Whether to resolve and then reverse-lookup broker IPs during startup.
- `client_id` (default = "otel-collector"): The client ID to configure the Kafka client with. The client ID is used for all produce requests.
- `logs`
  - `topic` (default = otlp_logs): The name of the Kafka topic to which logs will be exported.
  - `encoding` (default = otlp_proto): The encoding for logs. See Supported encodings.
  - `topic_from_metadata_key` (default = ""): The name of the metadata key whose value should be used as the message's topic. Useful to dynamically produce to topics based on request inputs. It takes precedence over the `topic_from_attribute` and `topic` settings.
- `metrics`
  - `topic` (default = otlp_metrics): The name of the Kafka topic to which metrics will be exported.
  - `encoding` (default = otlp_proto): The encoding for metrics. See Supported encodings.
  - `topic_from_metadata_key` (default = ""): The name of the metadata key whose value should be used as the message's topic. Useful to dynamically produce to topics based on request inputs. It takes precedence over the `topic_from_attribute` and `topic` settings.
- `traces`
  - `topic` (default = otlp_spans): The name of the Kafka topic to which traces will be exported.
  - `encoding` (default = otlp_proto): The encoding for traces. See Supported encodings.
  - `topic_from_metadata_key` (default = ""): The name of the metadata key whose value should be used as the message's topic. Useful to dynamically produce to topics based on request inputs. It takes precedence over the `topic_from_attribute` and `topic` settings.
- `topic` (Deprecated in v0.124.0: use `logs::topic`, `metrics::topic`, and `traces::topic`): If specified, this is used as the default topic, but it is overridden by signal-specific configuration. See Destination Topic below for more details.
- `topic_from_attribute` (default = ""): The resource attribute whose value should be used as the message's topic. See Destination Topic below for more details.
- `encoding` (Deprecated in v0.124.0: use `logs::encoding`, `metrics::encoding`, and `traces::encoding`): If specified, this is used as the default encoding, but it is overridden by signal-specific configuration. See Supported encodings below for more details.
- `include_metadata_keys` (default = []): A list of metadata keys to propagate as Kafka message headers. If one or more keys aren't found in the metadata, they are ignored. The keys also partition the data before export if `sending_queue::batch` is defined.
- `partition_traces_by_id` (default = false): Configures the exporter to include the trace ID as the message key in trace messages sent to Kafka. Please note: this setting has no effect on Jaeger encodings, since the Jaeger encodings include the trace ID as the message key by default.
- `partition_metrics_by_resource_attributes` (default = false): Configures the exporter to include the hash of sorted resource attributes as the message partitioning key in metric messages sent to Kafka.
- `partition_logs_by_resource_attributes` (default = false): Configures the exporter to include the hash of sorted resource attributes as the message partitioning key in log messages sent to Kafka.
- `tls`: See TLS Configuration Settings for the full set of available options.
- `auth`
  - `plain_text` (Deprecated in v0.123.0: use `sasl` with `mechanism` set to PLAIN instead.)
    - `username`: The username to use.
    - `password`: The password to use.
  - `sasl`
    - `username`: The username to use.
    - `password`: The password to use.
    - `mechanism`: The SASL mechanism to use (SCRAM-SHA-256, SCRAM-SHA-512, AWS_MSK_IAM_OAUTHBEARER, or PLAIN).
    - `version` (default = 0): The SASL protocol version to use (0 or 1).
    - `aws_msk`
      - `region`: AWS region, used with the AWS_MSK_IAM_OAUTHBEARER mechanism.
  - `tls` (Deprecated in v0.124.0: configure `tls` at the top level): An alias for `tls` at the top level.
  - `kerberos`
    - `service_name`: Kerberos service name.
    - `realm`: Kerberos realm.
    - `use_keytab`: If true, the keytab file is used for authentication instead of the password.
    - `username`: The Kerberos username used to authenticate with the KDC.
    - `password`: The Kerberos password used to authenticate with the KDC.
    - `config_file`: Path to the Kerberos configuration, e.g. /etc/krb5.conf.
    - `keytab_file`: Path to the keytab file, e.g. /etc/security/kafka.keytab.
    - `disable_fast_negotiation`: Disable PA-FX-FAST negotiation (Pre-Authentication Framework - Fast). Some common Kerberos implementations do not support PA-FX-FAST negotiation. This is set to `false` by default.
- `metadata`
  - `full` (default = true): Whether to maintain a full set of metadata. When disabled, the client does not make the initial request to the broker at startup.
  - `refresh_interval` (default = 10m): Controls the frequency at which cluster metadata is refreshed in the background.
  - `retry`
    - `max` (default = 3): The number of retries to get metadata.
    - `backoff` (default = 250ms): How long to wait between metadata retries.
- `timeout` (default = 5s): Time to wait per individual attempt to produce data to Kafka.
- `retry_on_failure`
  - `enabled` (default = true)
  - `initial_interval` (default = 5s): Time to wait after the first failure before retrying; ignored if `enabled` is `false`.
  - `max_interval` (default = 30s): The upper bound on backoff; ignored if `enabled` is `false`.
  - `max_elapsed_time` (default = 120s): The maximum amount of time spent trying to send a batch; ignored if `enabled` is `false`.
- `sending_queue`
  - `enabled` (default = true)
  - `num_consumers` (default = 10): Number of consumers that dequeue batches; ignored if `enabled` is `false`.
  - `queue_size` (default = 1000): Maximum number of batches kept in memory before dropping data; ignored if `enabled` is `false`. Calculate this as `num_seconds * requests_per_second`, where `num_seconds` is the number of seconds to buffer in case of a backend outage and `requests_per_second` is the average number of requests per second. For example, buffering a 60-second outage at 10 requests per second requires a queue size of 600.
- `producer`
  - `max_message_bytes` (default = 1000000): The maximum permitted size of a message in bytes.
  - `required_acks` (default = 1): Controls when a message is regarded as transmitted. See https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#acks.
  - `compression` (default = 'none'): The compression used when producing messages to Kafka. The options are `none`, `gzip`, `snappy`, `lz4`, and `zstd`. See https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#compression-type.
  - `compression_params`
    - `level` (default = -1): The compression level used when producing messages to Kafka. Supported levels per codec:

      | compression | level |
      | --- | --- |
      | gzip | 1 to 9, and -1 |
      | zstd | 1, 3, 6, 11 |
      | lz4 | only supports fast level |
      | snappy | no compression levels supported yet |

  - `flush_max_messages` (default = 0): The maximum number of messages the producer will send in a single broker request.
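As a combined illustration of the options above, here is a sketch of a fuller configuration. The broker addresses, credentials, topic names, and the CA file path are placeholders, and `password: ${env:KAFKA_PASSWORD}` assumes the Collector's environment-variable substitution syntax is available in your deployment:

```yaml
exporters:
  kafka:
    brokers:
      - broker-1:9092   # placeholder broker addresses
      - broker-2:9092
    protocol_version: 2.1.0
    logs:
      topic: otlp_logs
    metrics:
      topic: otlp_metrics
    traces:
      topic: otlp_spans
    tls:
      ca_file: /etc/ssl/certs/ca.pem     # placeholder path; see TLS Configuration Settings
    auth:
      sasl:
        username: otel-producer          # placeholder credentials
        password: ${env:KAFKA_PASSWORD}  # assumes env-var substitution
        mechanism: SCRAM-SHA-512
    producer:
      max_message_bytes: 1000000
      required_acks: 1        # leader acknowledgement only
      compression: zstd
      compression_params:
        level: 3              # a zstd level from the table above
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 600         # 60 s of backlog at 10 requests/s
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 120s
```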
## Supported encodings

The Kafka exporter supports encoding extensions, as well as the following built-in encodings.

Available for all signals:

- `otlp_proto`: data is encoded as OTLP Protobuf.
- `otlp_json`: data is encoded as OTLP JSON.

Available only for traces:

- `jaeger_proto`: the payload is serialized to a single Jaeger proto `Span`, and keyed by TraceID.
- `jaeger_json`: the payload is serialized to a single Jaeger JSON Span using `jsonpb`, and keyed by TraceID.
- `zipkin_proto`: the payload is serialized to Zipkin v2 proto Span.
- `zipkin_json`: the payload is serialized to Zipkin v2 JSON Span.

Available only for logs:

- `raw`: if the log record body is a byte array, it is sent as is. Otherwise, it is serialized to JSON. Resource and record attributes are discarded.
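For instance, a minimal sketch selecting non-default encodings per signal (the broker address is a placeholder):

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
    logs:
      encoding: raw           # ship log bodies as-is (bytes), or serialized to JSON
    traces:
      encoding: jaeger_json   # Jaeger JSON spans, keyed by TraceID
```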
Example configuration:

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
```
## Destination Topic

The destination topic can be defined in a few different ways and takes priority in the following order:

1. When `<signal>.topic_from_metadata_key` is set to use a key from the request metadata, the value of this key is used as the signal-specific topic.
2. When `topic_from_attribute` is configured, and the corresponding attribute is found on the ingested data, the value of this attribute is used.
3. When the incoming context contains a topic set via the `topic.WithTopic` function (from the `github.com/open-telemetry/opentelemetry-collector-contrib/pkg/kafka/topic` package), the value set in the context is used.
4. Finally, the `<signal>::topic` configuration is used for the signal-specific destination topic. If this is not explicitly configured, the `topic` configuration (deprecated in v0.124.0) is used as a fallback for all signals.
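To make the precedence concrete, here is a sketch combining several of these mechanisms; the metadata key and attribute names are hypothetical:

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
    logs:
      # 1. Highest precedence: topic taken from request metadata (hypothetical key).
      topic_from_metadata_key: x-kafka-topic
    # 2. Next: topic taken from a resource attribute on the data (hypothetical name).
    topic_from_attribute: service.kafka_topic
    traces:
      # 4. Fallback when nothing above matches: the signal-specific topic.
      topic: otlp_spans
```

Step 3, `topic.WithTopic`, is set programmatically in the request context rather than in YAML, so it does not appear in the configuration file.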