The OpenTelemetry Collector is a highly extensible service designed to receive, process, and export telemetry data. When paired with Elasticsearch or Grafana Loki, it becomes a robust solution for managing logs across distributed systems. This guide dives into configuring the OpenTelemetry Collector to send logs to Elasticsearch or Loki.

What is the OpenTelemetry Collector?

The OpenTelemetry Collector acts as middleware between telemetry sources (e.g., applications, services, or infrastructure) and telemetry backends like Grafana Loki. Its flexibility allows you to define pipelines that:

  1. Receive telemetry data using various receivers (e.g., filelog, syslog).
  2. Process the data using processors (e.g., batch, filter).
  3. Export it to destinations using exporters (e.g., elasticsearch, loki, otlphttp).

Prerequisites

Before configuring the Collector, ensure the following:

  1. OpenTelemetry Collector: Installed as a binary or container.
  2. Elasticsearch or Grafana Loki: Set up locally, self-hosted in the cloud, or as a managed service (Elastic Cloud or Grafana Cloud).
  3. Basic YAML Knowledge: Required for pipeline configuration.

The Collector is driven by a configuration file that tells it how to ingest OpenTelemetry logs from our application. This file defines the components and their relationships, and we will build the entire observability pipeline inside it.
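
At a high level, every Collector configuration has the same shape: components are declared under receivers, processors, and exporters, and then wired together under service.pipelines. The schematic sketch below (not yet a working configuration, since the components have no settings) shows that shape; the following steps fill it in for our logging use case.

receivers:        # how telemetry enters the Collector
  otlp:
processors:       # optional in-flight transformations
  batch:
exporters:        # where telemetry is sent
  otlphttp:
service:          # pipelines wire the components together
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]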

Step 1: Install the OpenTelemetry Collector

Binary Installation

Download the latest OpenTelemetry Collector binary:

curl -L -o otelcol https://github.com/open-telemetry/opentelemetry-collector-releases/releases/latest/download/otelcol_$(uname -s | tr '[:upper:]' '[:lower:]')_amd64  
chmod +x otelcol

Docker Installation

Alternatively, use the Docker image:

docker pull otel/opentelemetry-collector:latest
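
You can then run the image with your configuration mounted and the OTLP ports exposed. A minimal sketch, assuming the core otel/opentelemetry-collector image (which reads its configuration from /etc/otelcol/config.yaml by default) and the otel-config.yaml file we create in the next step:

docker run --rm \
  -v $(pwd)/otel-config.yaml:/etc/otelcol/config.yaml \
  -p 4317:4317 -p 4318:4318 \
  otel/opentelemetry-collector:latest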

Step 2: Create the Configuration File

The configuration file is written in YAML. To start, create the otel-config.yaml file in your code editor.

Receive OpenTelemetry logs via gRPC and HTTP

First, we will configure the OpenTelemetry receiver. The otlp receiver accepts logs in the OpenTelemetry Protocol (OTLP) format over gRPC and HTTP. We will use it to receive logs from any instrumented application.

Now add the following configuration to the otel-config.yaml file:

# Receivers
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

In this configuration:

  • receivers: The list of receivers to receive telemetry data. In this case, we are using the otlp receiver.
  • otlp: The OpenTelemetry receiver that accepts logs in the OpenTelemetry format.
  • protocols: The list of protocols that the receiver supports. In this case, we are using grpc and http.
  • grpc: The gRPC protocol configuration. The receiver will accept logs via gRPC on 4317.
  • http: The HTTP protocol configuration. The receiver will accept logs via HTTP on 4318.
  • endpoint: The IP address and port to listen on. Here, 0.0.0.0 listens on all network interfaces, on port 4317 for gRPC and port 4318 for HTTP.

For more information on the otlp receiver configuration, see the OpenTelemetry Receiver OTLP documentation.
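
On the application side, most OpenTelemetry SDKs can be pointed at this receiver through the standard OTLP environment variables. For example, assuming the Collector is reachable on localhost and the application exports over HTTP:

export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"

To use gRPC instead, point the endpoint at port 4317 and set the protocol to grpc.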

Create batches of logs using an OpenTelemetry Processor

Next, add the following configuration to the otel-config.yaml file:

# Processors
processors:
  batch:

In this configuration:

  • processors: The list of processors to process telemetry data. In this case, we are using the batch processor.
  • batch: The batch processor accepts telemetry data from other otelcol components and groups it into batches, which compresses the data better and reduces the number of outgoing requests.

For more information on the batch processor configuration, see the OpenTelemetry Processor Batch documentation.
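
The batch processor works with sensible defaults, but its behavior can be tuned. A sketch of the most commonly adjusted settings (the values shown are illustrative, not recommendations):

processors:
  batch:
    timeout: 5s                # flush a batch after this much time, even if it is not full
    send_batch_size: 1024      # number of records that triggers a flush
    send_batch_max_size: 2048  # hard upper bound on batch size (0 means no limit)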

Export logs to Elasticsearch or Loki using an OpenTelemetry Exporter

We will use an OTLP exporter to send the logs to either Loki's native OTLP endpoint or Elastic's OTLP intake. Add the configuration for your backend to the otel-config.yaml file; both options are shown below under a single exporters key, so keep the one you need (or both):

# Exporters
exporters:
  # Option 1: Grafana Loki (native OTLP endpoint)
  otlphttp/logs:
    endpoint: "http://loki:3100/otlp"
    tls:
      insecure: true
  # Option 2: Elasticsearch / Elastic APM
  otlp/elastic:
    # !!! Elastic APM https endpoint WITHOUT the "https://" prefix
    endpoint: "Elasticsearch_Host:443"
    compression: none
    headers:
      Authorization: "Bearer token"

In this configuration:

  • exporters: The list of exporters to export telemetry data. Here we define otlphttp/logs for Loki and otlp/elastic for Elastic.
  • otlphttp/logs: Accepts telemetry data from other otelcol components and sends it over the network using the OTLP HTTP protocol; the endpoint points at Loki's OTLP ingestion path (/otlp).
  • otlp/elastic: Sends telemetry data using the OTLP gRPC protocol; the endpoint is the Elastic intake host and port, and the Authorization header carries your token.
  • tls / insecure: Setting insecure to true disables TLS for the Loki connection. Only do this for local or test setups.

For more information, see the OpenTelemetry Exporter OTLP HTTP and Exporter OTLP (gRPC) documentation.
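
If your Loki (or any OTLP-compatible) endpoint is exposed over HTTPS with authentication, the same otlphttp exporter can be configured with TLS enabled and custom headers. A hedged sketch; the hostname, credential, and CA path are placeholders:

exporters:
  otlphttp/logs:
    endpoint: "https://loki.example.com/otlp"             # placeholder host
    headers:
      Authorization: "Basic <base64-encoded credentials>" # placeholder credential
    tls:
      insecure: false
      # ca_file: /path/to/ca.pem                          # optional custom CA bundle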

Creating the Pipeline

Now that we have configured the receiver, processor, and exporter, we need to create a pipeline to connect these components. Add one of the following configurations to the otel-config.yaml file:

# Pipelines (option 1: Loki, logs only)
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/logs]

# Pipelines (option 2: Elastic, with traces, metrics, and logs)
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      # spanmetrics requires a spanmetrics connector to be defined; remove it if you have not configured one
      exporters: [spanmetrics, otlp/elastic]
    metrics:
      receivers: [otlp, spanmetrics]
      processors: [batch]
      exporters: [otlp/elastic]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/elastic]

In this configuration:

  • pipelines: The pipelines that connect receivers, processors, and exporters. Here we use a logs pipeline; the Collector also supports traces and metrics pipelines (and an experimental profiles signal).
  • receivers: The list of receivers to receive telemetry data. In this case, we are using the otlp receiver component we created earlier.
  • processors: The list of processors to process telemetry data. In this case, we are using the batch processor component we created earlier.
  • exporters: The list of exporters to export telemetry data. In this case, we are using the otlphttp/logs (or otlp/elastic) exporter component we created earlier. A pipeline can also fan out to several exporters at once, as shown in the sketch after this list.
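
For example, to ship the same logs to both backends simultaneously, a single logs pipeline can reference multiple exporters. A sketch, assuming both exporters from the previous step are defined:

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/logs, otlp/elastic]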

Load the Configuration

Before you load the configuration into the OpenTelemetry Collector, compare your configuration with the completed configuration below:

# Receivers
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
        
# Processors
processors:
  batch:

# Exporters
exporters:
  otlphttp/logs:
    endpoint: "http://loki:3100/otlp"
    tls:
      insecure: true
  # Alternative: Elastic (uncomment to use, and reference otlp/elastic in the pipeline below)
  # otlp/elastic:
  #   # !!! Elastic APM https endpoint WITHOUT the "https://" prefix
  #   endpoint: "Elasticsearch_Host:443"
  #   compression: none
  #   headers:
  #     Authorization: "Bearer token"

# Pipelines
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/logs]

Next, we need to apply the configuration to the OpenTelemetry Collector. To do this, restart the Collector (container or binary) with the configuration file. After restarting, you should see the following message in the Collector logs:

2024-12-02 11:48:33 2024-12-02T06:18:33.123Z    info    service@v0.111.0/service.go:234 Everything is ready. Begin running and processing data.
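
If you are running the Collector binary directly rather than in a container, starting it with the configuration file looks like this (assuming otel-config.yaml is in the current directory):

./otelcol --config otel-config.yaml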

Advanced Configuration

Filtering Logs

Use the filter processor to include or exclude specific logs, and remember to add it to the processors list of the logs pipeline:

processors:
  filter:
    logs:
      include:
        match_type: strict
        severity_texts:
          - "ERROR"

Conclusion

Configuring the OpenTelemetry Collector to send logs to Elasticsearch or Loki is a powerful way to centralize and analyze your system’s telemetry data. With receivers, processors, and exporters, the Collector provides unmatched flexibility for creating custom pipelines.

By following this guide, you’ll have a reliable setup to streamline your observability efforts.

Related Articles:

OpenTelemetry with Elastic Observability

OpenTelemetry Collector: A Gateway to Modern Observability

Test and Analyze OpenTelemetry Collector processing

OpenTelemetry: Automatic vs. Manual Instrumentation — Which One Should You Use?

Configuration of the Elastic Distribution of OpenTelemetry Collector (EDOT)

Instrumenting a Java application with OpenTelemetry for distributed tracing and integrating with Elastic Observability

