Recommended ingestion configuration
Ingesting log data correctly ensures your log data is parsed properly before it enters Chronosphere Observability Platform.
Although you can use your existing ingestion pipeline, Chronosphere recommends using Core Agent or Core Agent plus Telemetry Pipeline.
To get started with ingesting logs using Core Agent or Telemetry Pipeline:
- Create an ingest API token.
- Choose the configuration you want to use to ingest log data: Core Agent on its own, or Core Agent plus Telemetry Pipeline.
Create an ingest API token
Before ingesting log data, you need to create an API token to authenticate with and ingest data to Observability Platform. See Ingest tokens (opens in a new tab) in the LogScale documentation for more information about creating ingest tokens.
To create an ingest API token, you must have administrative permissions.
- In the navigation menu, select Explorers > Logs Explorer.
- Click Repository settings to open LogScale repository settings in a new tab.
- In the LogScale interface, select the repository you want to create an API token for.
- In the main LogScale navigation, click Settings.
- In the Ingest section of the sidebar navigation, click Ingest tokens.
- Click Add token to create a new ingest token.
- Enter a name for your token.
- Optional: Assign a parser for your token if you want to parse data during ingestion.
- Click Save to save your ingest token.
Store your ingest API token in a secure location. If you lose your token, you must create a new one.
Use this ingest API token in your Telemetry Pipeline configuration to authenticate with and begin sending log data to Observability Platform.
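For example, in a Fluent Bit style Core Agent configuration, the token is typically passed as an Authorization header on the output section. A minimal sketch, where `ADDRESS` and `API_TOKEN` are placeholders for your own values:

```
# Illustrative sketch: replace ADDRESS with your instance address and
# API_TOKEN with the ingest API token you created in LogScale.
[OUTPUT]
    Name   http
    Match  *
    Host   ADDRESS.ingest.logs.chronosphere.io
    Port   443
    URI    /api/v1/ingest/json
    Header Authorization Bearer API_TOKEN
    tls    On
```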
Core Agent
You can run Core Agent (opens in a new tab) as an agent that collects data from your app, parses that data, and sends the data directly to Observability Platform. Use this deployment method if:
- You're comfortable managing YAML-based configuration files.
- You plan on parsing data in Core Agent, and don't want to add Telemetry Pipeline as another component in your ingestion pipeline.
However, this method means you might need to write your own parser for complex configurations, whereas Telemetry Pipeline has built-in parsers for managing complex configurations. See Configure Core Agent for more information.
Configure Core Agent
Core Agent can run as an agent in your environment, as a data collector, or as both. In this configuration, you run Core Agent as an agent that collects data from your app, parses that data, and sends the data directly to Observability Platform.
Complete the following steps to ingest data with Core Agent:
- Create a configuration file (opens in a new tab) to define your services. See the example configuration file for more information. Alternatively, you can create a YAML configuration file (opens in a new tab). See this example (opens in a new tab) for reference.
- Optional: Add variables (opens in a new tab) or commands (opens in a new tab) to enhance your configuration file.
- Add inputs (opens in a new tab) to your configuration file.
- Create a `parsers.conf` configuration file to define which parser to use. The built-in parsers (opens in a new tab) cover most use cases.
- Optional: Add filters (opens in a new tab) to your configuration file to enrich your data.
- Define the output (opens in a new tab) destination for your data, which is your Observability Platform tenant. The full URL is `https://ADDRESS.ingest.logs.chronosphere.io/`. Replace `ADDRESS` with your company name, prefixed to the Observability Platform instance that ends in `ingest.logs.chronosphere.io`. For example, `MY_COMPANY.ingest.logs.chronosphere.io`.
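As a sketch, a minimal `parsers.conf` for JSON-formatted application logs might look like the following. The parser name `app_json` and the time fields are illustrative assumptions, not required values:

```
# Illustrative parser definition; adjust Name, Time_Key, and
# Time_Format to match your actual log format.
[PARSER]
    Name        app_json
    Format      json
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
```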
Telemetry Pipeline
While you can run Core Agent on its own, you can also have it send data to Telemetry Pipeline to do your parsing there. Use this deployment method if:
- You want a graphical interface to manage your agents and pipeline configurations, rather than using YAML-based configuration files.
- You want the ability to run sample actions in the pipeline to preview your data transformations before applying the changes.
This method adds Telemetry Pipeline as another component in your ingestion pipeline. However, previewing your transformations means you can safely modify the parsing logic in your pipeline before making changes to your data. See Configure Core Agent with Telemetry Pipeline for more information.
Configure Core Agent with Telemetry Pipeline
In this configuration, you run Core Agent as an agent in your environment that collects data and sends it to Telemetry Pipeline for processing. You parse your data in Telemetry Pipeline, and then send the processed data to Observability Platform. You can manage your Core Agent in Telemetry Pipeline.
Complete the following steps in Core Agent:
- Create a configuration file (opens in a new tab) to define your services. Alternatively, you can create a YAML configuration file (opens in a new tab). See this example (opens in a new tab) for reference.
- Add inputs (opens in a new tab) to your configuration file.
- Define the output (opens in a new tab) destination for your data, which is Telemetry Pipeline.
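For example, if your Telemetry Pipeline endpoint accepts the Fluent Bit forward protocol, the output section might look like this sketch. `PIPELINE_HOST` is a placeholder; confirm the host, port, and protocol your pipeline actually exposes:

```
# Illustrative only: PIPELINE_HOST and the port depend on how your
# Telemetry Pipeline endpoint is exposed in your environment.
[OUTPUT]
    Name  forward
    Match *
    Host  PIPELINE_HOST
    Port  24224
```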
Complete the following steps in Telemetry Pipeline:
- Create an ingest pipeline to read data from your application. You can transform, drop, and route data in a pipeline.
- Define a secret for your pipeline.
- Optional: Define a parser to determine which fields are extracted during ingest.
- Add processing rules to your ingest pipeline.
Configuration file example
The main Core Agent configuration file supports these section types:
- SERVICE
- INPUT
- FILTER
- OUTPUT
A section can contain individual entries, which are defined by a line of text that contains both a key and a value.
See the Configuration file (opens in a new tab) page in the Core Agent documentation for more information.
Use the following example as a model for creating your own configuration file. This example assumes your logs are in JSON format, as indicated by the `/api/v1/ingest/json` value for the `URI` key.
- Replace `ADDRESS` with your company name, prefixed to the Observability Platform instance that ends in `ingest.logs.chronosphere.io`. For example, `MY_COMPANY.ingest.logs.chronosphere.io`.
- Replace `API_TOKEN` with the ingest API token you created in LogScale.
# Agent service settings
[SERVICE]
    Daemon Off
    Flush 1
    Log_Level info
    Parsers_File /fluent-bit/etc/parsers.conf
    Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port 2020
    Health_Check On

# Tail Kubernetes container log files
[INPUT]
    Name tail
    Path /var/log/containers/*.log
    multiline.parser docker, cri
    Tag kube.*
    Mem_Buf_Limit 5MB
    Skip_Long_Lines On

# Read kubelet logs from the systemd journal
[INPUT]
    Name systemd
    Tag host.*
    Systemd_Filter _SYSTEMD_UNIT=kubelet.service
    Read_From_Tail On

# Enrich records with Kubernetes metadata
[FILTER]
    Name kubernetes
    Match kube.*
    Merge_Log On
    # Merge_Log_Key takes a key name; parsed fields are nested under it
    Merge_Log_Key log_processed
    Keep_Log Off
    K8S-Logging.Parser On
    K8S-Logging.Exclude On

# Send records to Observability Platform over HTTPS
[OUTPUT]
    Name http
    Match *
    Host ADDRESS.ingest.logs.chronosphere.io
    Port 443
    URI /api/v1/ingest/json
    Header Authorization Bearer $API_TOKEN
    tls On
    tls.verify On
    compress On
    format json
    json_date_key @timestamp
    json_date_format iso8601
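Rather than hard-coding the token in the file, Fluent Bit style configurations can also read it from an environment variable using `${VAR}` syntax. A sketch, assuming the agent process is started with `API_TOKEN` exported:

```
# Assumes the agent was started with the token in its environment,
# for example: export API_TOKEN=your-ingest-token
[OUTPUT]
    Name       http
    Match      *
    Host       ADDRESS.ingest.logs.chronosphere.io
    Port       443
    URI        /api/v1/ingest/json
    Header     Authorization Bearer ${API_TOKEN}
    tls        On
    tls.verify On
```

Keeping the token out of the configuration file makes it easier to manage the file in version control without exposing the secret.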