Sending Logs to Loki via Kafka using Alloy

Alloy natively supports receiving logs via Kafka. In this example, we will configure Alloy to receive logs via Kafka using two different methods:

- loki.* components
- otelcol.* components

Before you begin, ensure you have the following to run the demo:

- Docker
- Git
In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services:

- Main App
- User Service
- Plant Service
- Simulation Service
- Websocket Service
- Bug Service
- Database

Each service generates logs that are sent to Alloy via Kafka. In this example, they are sent on two different topics:

- loki: a structured log message formatted as JSON.
- otlp: a serialized OpenTelemetry log message.

You would not typically do this within your own application, but for the purposes of this example we wanted to show how Alloy can handle different types of log messages over Kafka.
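For illustration, once the stack from Step 1 is running, you could publish a hand-written message to the loki topic yourself. This is a hedged sketch: the JSON payload below is made up, and the console producer script name and flags depend on the Kafka image used by the demo (it may be kafka-console-producer rather than kafka-console-producer.sh).

# Publish a single hypothetical JSON log line to the "loki" topic.
echo '{"level":"info","service":"plant_service","message":"Plant watered successfully"}' | \
  docker exec -i loki-fundamentals-kafka-1 \
  kafka-console-producer.sh --bootstrap-server kafka:9092 --topic loki

The otlp topic carries binary OTLP protobuf, so it is not something you would normally write by hand; the demo application produces those messages for you.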
Step 1: Environment setup

In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose.
To get started, clone the repository that contains our demo application:
git clone -b microservice-kafka https://github.com/grafana/loki-fundamentals.git
Next we will spin up our observability stack using Docker Compose:
docker compose -f loki-fundamentals/docker-compose.yml up -d
This will spin up the following services:
✔ Container loki-fundamentals-grafana-1 Started
✔ Container loki-fundamentals-loki-1 Started
✔ Container loki-fundamentals-alloy-1 Started
✔ Container loki-fundamentals-zookeeper-1 Started
✔ Container loki-fundamentals-kafka-1 Started
We will be accessing two UI interfaces:

- Alloy at http://localhost:12345
- Grafana at http://localhost:3000
Step 2: Configure Alloy to ingest raw Kafka logs

In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the config.alloy file to include the Kafka logs configuration.

Open your config.alloy file

Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, open the config.alloy file in the code editor:

- Open the loki-fundamentals directory in a code editor of your choice.
- Locate the config.alloy file in the top-level directory, loki-fundamentals.
- Click the config.alloy file to open it in the code editor.

You will copy all three of the following configuration snippets into the config.alloy file.
First, we will configure the Loki Kafka source. loki.source.kafka reads messages from Kafka using a consumer group and forwards them to other loki.* components. The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in forward_to.

Add the following configuration to the config.alloy file:
loki.source.kafka "raw" {
  brokers       = ["kafka:9092"]
  topics        = ["loki"]
  forward_to    = [loki.write.http.receiver]
  relabel_rules = loki.relabel.kafka.rules
  version       = "2.0.0"
  labels        = {service_name = "raw_kafka"}
}
In this configuration:

- brokers: The Kafka brokers to connect to.
- topics: The Kafka topics to consume. In this case, we are consuming the loki topic.
- forward_to: The list of receivers to forward the logs to. In this case, we are forwarding the logs to loki.write.http.receiver.
- relabel_rules: The relabel rules to apply to the incoming logs. This can be used to generate labels from the temporary internal labels that are added by the Kafka source.
- version: The Kafka protocol version to use.
- labels: The labels to add to the incoming logs. In this case, we are adding a service_name label with the value raw_kafka. This will be used to identify the logs from the raw Kafka source in the Log Explorer App in Grafana.

For more information on the loki.source.kafka configuration, see the Loki Kafka Source documentation.
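Because loki.source.kafka joins Kafka as a consumer group, you can also observe it from the Kafka side once this configuration is loaded. This is a hedged sketch: the exact script name (kafka-consumer-groups versus kafka-consumer-groups.sh) depends on the Kafka image used by the demo, and the group name to describe is a placeholder you take from the list output.

# List the consumer groups registered with the broker; Alloy's group should
# appear here once the component is consuming.
docker exec -it loki-fundamentals-kafka-1 \
  kafka-consumer-groups.sh --bootstrap-server kafka:9092 --list

# Describe a group (substitute the group name from the previous command)
# to see its topic, partitions, and consumer lag.
docker exec -it loki-fundamentals-kafka-1 \
  kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group <group-name>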
Next, we will configure the Loki relabel rules. The loki.relabel component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component's arguments. In our case, we reference the rules directly from the loki.source.kafka component.

Now add the following configuration to the config.alloy file:
loki.relabel "kafka" {
  forward_to = [loki.write.http.receiver]

  rule {
    source_labels = ["__meta_kafka_topic"]
    target_label  = "topic"
  }
}
In this configuration:

- forward_to: The list of receivers to forward the logs to. In this case, loki.write.http.receiver. Since we reference the rules directly from the loki.source.kafka component, forward_to acts mainly as a placeholder here; it is required by the loki.relabel component.
- rule: The relabeling rule to apply to the incoming logs. In this case, we are renaming the __meta_kafka_topic label to topic.

For more information on the loki.relabel configuration, see the Loki Relabel documentation.
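To see the effect of this rule, once the remaining components below are in place and the demo application (Step 4) is producing logs, you could ask Loki which values the new topic label holds. A minimal sketch, assuming the Loki container's port 3100 is published to the host; if it is not, run the same command from a container on the loki-fundamentals network.

# List the values of the "topic" label known to Loki; "loki" should appear.
curl -s "http://localhost:3100/loki/api/v1/label/topic/values"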
Lastly, we will configure the Loki write component. loki.write receives log entries from other loki components and sends them over the network using the Loki logproto format.

Finally, add the following configuration to the config.alloy file:
loki.write "http" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}
In this configuration:

- endpoint: The endpoint to send the logs to. In this case, we are sending the logs to the Loki HTTP endpoint.

For more information on the loki.write configuration, see the Loki Write documentation.
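To make the target of loki.write concrete, the sketch below pushes a single hand-written log entry to the same push API from the host. This is only an illustration: the label and log line are made up, it assumes Loki's port 3100 is published to the host, and date +%s%N needs GNU date (substitute a nanosecond timestamp on macOS).

# Push one test log entry directly to Loki's HTTP push API.
# Each value pair is [<timestamp in nanoseconds>, <log line>].
curl -s -X POST "http://localhost:3100/loki/api/v1/push" \
  -H "Content-Type: application/json" \
  -d '{
    "streams": [
      {
        "stream": { "service_name": "manual_test" },
        "values": [ [ "'$(date +%s%N)'", "hello from curl" ] ]
      }
    ]
  }'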
Once added, save the file. Then run the following command to request Alloy to reload the configuration:
curl -X POST http://localhost:12345/-/reload
The new configuration will be loaded. You can verify this by checking the Alloy UI: http://localhost:12345.
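If you prefer the command line to the UI, Alloy's HTTP server also exposes a readiness endpoint you can probe after a reload. A quick check, noting that the exact response body may vary between Alloy versions:

# Returns HTTP 200 once Alloy has loaded the configuration and is serving requests.
curl -s http://localhost:12345/-/ready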
Stuck? Need help?

If you get stuck or need help creating the configuration, you can copy and replace the entire config.alloy using the completed configuration file:
cp loki-fundamentals/completed/config-raw.alloy loki-fundamentals/config.alloy
curl -X POST http://localhost:12345/-/reload
Step 3: Configure Alloy to ingest OpenTelemetry logs via Kafka
Next, we will configure Alloy to also ingest OpenTelemetry logs via Kafka. To do this, we need to update the Alloy configuration file once again, adding the new components to the config.alloy file alongside the existing components.

Open your config.alloy file

Like before, we build our next pipeline configuration within the same config.alloy file. You will add the following configuration snippets to the file in addition to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file.
First, we will configure the OpenTelemetry Kafka receiver. otelcol.receiver.kafka accepts telemetry data from a Kafka broker and forwards it to other otelcol.* components.

Now add the following configuration to the config.alloy file:
otelcol.receiver.kafka "default" {
  brokers          = ["kafka:9092"]
  protocol_version = "2.0.0"
  topic            = "otlp"
  encoding         = "otlp_proto"

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}
In this configuration:

- brokers: The Kafka brokers to connect to.
- protocol_version: The Kafka protocol version to use.
- topic: The Kafka topic to consume. In this case, we are consuming the otlp topic.
- encoding: The encoding of the incoming logs. otlp_proto decodes messages as OTLP protobuf.
- output: The list of receivers to forward the logs to. In this case, we are forwarding the logs to otelcol.processor.batch.default.input.

For more information on the otelcol.receiver.kafka configuration, see the OpenTelemetry Receiver Kafka documentation.
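Before relying on this receiver, you can confirm that the otlp topic actually exists on the broker. A minimal sketch, assuming the Kafka container ships the standard CLI tools (the script name may differ by image); depending on broker settings, the topics may only be auto-created once the application starts producing in Step 4.

# List the topics on the broker; both "loki" and "otlp" should appear.
docker exec -it loki-fundamentals-kafka-1 \
  kafka-topics.sh --bootstrap-server kafka:9092 --list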
Next, we will configure an OpenTelemetry processor. otelcol.processor.batch accepts telemetry data from other otelcol components and places it into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size- and time-based batching.

Now add the following configuration to the config.alloy file:
otelcol.processor.batch "default" {
  output {
    logs = [otelcol.exporter.otlphttp.default.input]
  }
}
In this configuration:

- output: The list of receivers to forward the logs to. In this case, we are forwarding the logs to otelcol.exporter.otlphttp.default.input.

For more information on the otelcol.processor.batch configuration, see the OpenTelemetry Processor Batch documentation.
Lastly, we will configure the OpenTelemetry exporter. otelcol.exporter.otlphttp accepts telemetry data from other otelcol components and writes it over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint.

Finally, add the following configuration to the config.alloy file:
otelcol.exporter.otlphttp "default" {
  client {
    endpoint = "http://loki:3100/otlp"
  }
}
In this configuration:

- client: The client configuration for the exporter. In this case, we are sending the logs to the Loki OTLP endpoint.

For more information on the otelcol.exporter.otlphttp configuration, see the OpenTelemetry Exporter OTLP HTTP documentation.
Once added, save the file. Then run the following command to request Alloy to reload the configuration:
curl -X POST http://localhost:12345/-/reload
The new configuration will be loaded. You can verify this by checking the Alloy UI: http://localhost:12345.
Stuck? Need help (Full Configuration)?

If you get stuck or need help creating the configuration, you can copy and replace the entire config.alloy with the completed configuration file. This differs from the previous Stuck? Need help section: here we replace the entire configuration file rather than just adding the first Loki raw pipeline configuration.
cp loki-fundamentals/completed/config.alloy loki-fundamentals/config.alloy
curl -X POST http://localhost:12345/-/reload
Step 4: Start the Carnivorous Greenhouse
In this step, we will start the Carnivorous Greenhouse application. To start the application, run the following command:
Note

This docker-compose file relies on the loki-fundamentals_loki Docker network. If you have not started the observability stack, you will need to start it first.
docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build
This will start the following services:
✔ Container greenhouse-db-1 Started
✔ Container greenhouse-websocket_service-1 Started
✔ Container greenhouse-bug_service-1 Started
✔ Container greenhouse-user_service-1 Started
✔ Container greenhouse-plant_service-1 Started
✔ Container greenhouse-simulation_service-1 Started
✔ Container greenhouse-main_app-1 Started
Once started, you can access the Carnivorous Greenhouse application at http://localhost:5005. Generate some logs by interacting with the application.
Finally, to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at http://localhost:3000/a/grafana-lokiexplore-app/explore.
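You can also confirm ingestion from the command line. The sketch below lists the values of the service_name label in Loki; you should see raw_kafka from the raw pipeline alongside the service names attached to the OpenTelemetry logs. It assumes Loki's port 3100 is published to the host.

# List all service_name label values currently known to Loki.
curl -s "http://localhost:3100/loki/api/v1/label/service_name/values"

# Fetch the most recent log lines from the raw Kafka pipeline.
curl -s -G "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={service_name="raw_kafka"}' \
  --data-urlencode 'limit=10'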
Summary

In this example, we configured Alloy to ingest logs via Kafka in two different formats: raw logs and OpenTelemetry logs.

Where to go next?

Further reading

For more information on Grafana Alloy, refer to the Grafana Alloy documentation.
Complete metrics, logs, traces, and profiling example

If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, you can use Introduction to Metrics, Logs, Traces, and Profiling in Grafana. Intro-to-mltp provides a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana.

The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from intro-to-mltp can also be pushed to Grafana Cloud.