Processing rules

Processing rules transform data as it passes through your telemetry pipeline. You can use a variety of rules to perform different actions on data after it leaves its source but before it reaches its destination.

Some example use cases for processing rules include:

  • Adding a new field to each log for easier debugging and troubleshooting.
  • Redacting sensitive values to preserve user privacy.
  • Removing unnecessary fields to improve your data's signal-to-noise ratio.
  • Converting data from one format to another.
  • Turning unstructured data into structured data.
  • Aggregating logs into metrics to reduce data volume while retaining key insights.
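To make the redaction use case concrete, here's a minimal Python sketch of what a hash-style redaction rule does conceptually. This isn't Calyptia's implementation; the `redact_field` helper and the field names are hypothetical.

```python
import hashlib

def redact_field(record, key):
    # Hypothetical helper: replace a sensitive value with its SHA-256
    # digest, mirroring what a hash/redact processing rule might do.
    if key in record:
        record[key] = hashlib.sha256(record[key].encode()).hexdigest()
    return record

event = {"user": "alice", "ip": "203.0.113.7"}
redact_field(event, "ip")
# The "ip" value is now an irreversible digest; "user" is unchanged.
```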

How processing rules work

Each built-in processing rule performs a specific action, like hashing the value of a field or changing the name of a key. Additionally, the Custom Lua rule lets you write scripts that perform custom actions. By using more than one processing rule together, you can create a complex sequence of transformations that's suited for your data and application.

Most processing rules are compatible with most data formats. Processing rules are also designed to skip logs they're incompatible with rather than displaying an error message, which means you can use rules that apply only to certain chunks of data. For example, you can apply a broad processing rule to remove a certain field even if some logs in your pipeline don't contain that field.
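The skip-instead-of-error behavior can be sketched in Python. This is an illustration of the concept, not Calyptia's code: a remove-field rule simply passes through any record that doesn't contain the target field.

```python
def remove_field(record, key):
    # Rules skip records they don't apply to instead of erroring:
    # if the key is absent, the record passes through unchanged.
    record.pop(key, None)
    return record

remove_field({"msg": "ok", "debug": True}, "debug")  # drops "debug"
remove_field({"msg": "ok"}, "debug")                 # no-op, no error
```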

Format

A processing rule consists of input data, an action, and settings with defaults. Processing rules are run one at a time, from top to bottom. If you add multiple processing rules to the same pipeline for the same telemetry type, the output from your first rule becomes the input for your second rule, the output from your second rule becomes the input for your third rule, and so on.
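The top-to-bottom chaining described above can be modeled as a simple pipeline of functions, where each rule's output feeds the next rule's input. The rule names here (`add_field`, `rename_key`) are illustrative stand-ins, not Calyptia rule identifiers.

```python
def add_field(key, value):
    # Illustrative rule: add a new key/value pair to each record.
    def rule(record):
        record = dict(record)
        record[key] = value
        return record
    return rule

def rename_key(old, new):
    # Illustrative rule: rename a key if it exists.
    def rule(record):
        record = dict(record)
        if old in record:
            record[new] = record.pop(old)
        return record
    return rule

rules = [add_field("env", "prod"), rename_key("msg", "message")]

event = {"msg": "started"}
for rule in rules:  # top to bottom: each output becomes the next input
    event = rule(event)
# event == {"env": "prod", "message": "started"}
```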

Telemetry types

Requires Core Instance version 2.14.0 or later and pipeline version 24.6.9 or later.

Processing rules support logs, metrics, and traces. You can create processing rules for multiple telemetry types within the same pipeline, but each processing rule is applied only to its specified telemetry type. For example, if you create a search/replace value processing rule for metrics, this rule won't affect any logs or traces that pass through your pipeline, even if those logs or traces contain a matching key.
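A rough Python sketch of this per-telemetry-type dispatch, using a hypothetical search/replace-value rule: each rule is bound to one telemetry type, and events of other types pass through untouched.

```python
def replace_value(key, old, new):
    # Hypothetical search/replace-value rule for illustration.
    def rule(record):
        record = dict(record)
        if record.get(key) == old:
            record[key] = new
        return record
    return rule

def apply_rules(events, rules):
    # Each rule applies only to its specified telemetry type.
    out = []
    for telemetry_type, event in events:
        for rule_type, rule in rules:
            if rule_type == telemetry_type:
                event = rule(event)
        out.append((telemetry_type, event))
    return out

events = [
    ("metrics", {"name": "cpu", "host": "a"}),
    ("logs", {"msg": "hi", "host": "a"}),
]
rules = [("metrics", replace_value("host", "a", "b"))]
apply_rules(events, rules)
# Only the metrics event is rewritten; the log event is untouched,
# even though it contains a matching key.
```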

When raw log data passes through at least one processing rule, the data receives a new log field for each event. This log field lets you treat each event as a single unit of data.

Structured log data such as JSON doesn't receive a log field because you can already break structured data into discrete events.
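The distinction between raw and structured input can be sketched as follows. This is a conceptual Python illustration, assuming the field is literally named log as the text above describes:

```python
import json

def wrap_raw_event(line):
    # Raw, unstructured input gets wrapped in a single "log" field
    # so each event can be treated as one unit of data.
    return {"log": line}

def parse_event(line):
    # Structured input (JSON) is already made of discrete events,
    # so it's parsed directly instead of being wrapped.
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return wrap_raw_event(line)

parse_event("plain text message")   # -> {"log": "plain text message"}
parse_event('{"level": "info"}')    # -> {"level": "info"}
```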

Regex engines

Requires Calyptia Core version 2.9.0 or later.

Some processing rules, like Block Record and Rename Keys, accept regular expressions. For most of these rules, you can choose which regex engine parses your rule.
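To show how a regex-driven rule behaves, here's a hedged Python sketch in the spirit of a Rename Keys rule. It uses Python's re module rather than any engine Calyptia ships, and the pattern and helper name are illustrative only.

```python
import re

def rename_matching_keys(record, pattern, replacement):
    # Hypothetical Rename Keys-style rule: any key matching the regex
    # is rewritten; non-matching keys pass through untouched.
    regex = re.compile(pattern)
    return {regex.sub(replacement, k): v for k, v in record.items()}

event = {"k8s_pod": "mypod-0", "level": "info"}
rename_matching_keys(event, r"^k8s_", "kubernetes_")
# -> {"kubernetes_pod": "mypod-0", "level": "info"}
```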

Record accessor syntax

If your raw data is in JSON format, you can use record accessor syntax to extract nested fields within a larger JSON object.

To extract a nested field inside a standard object, use the following syntax:

$objectName.fieldName

To extract a nested field inside an array, use the following syntax:

$objectName.arrayName[X]

For example, given the following JSON object:

    "log": "1234",
    "kubernetes": {
        "pod_name": "mypod-0",
        "labels": [
            {
                "app": "myapp"
            }
        ],
    }
}

The expression $kubernetes.pod_name resolves to mypod-0, and the expression $kubernetes.labels[0] resolves to "app": "myapp".
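The accessor resolution described above can be sketched in Python. This resolver is a hypothetical illustration of the syntax's semantics, not Calyptia's parser:

```python
import re

def resolve_accessor(record, expr):
    # Hypothetical resolver for the accessor syntax shown above:
    # "$kubernetes.labels[0]" -> record["kubernetes"]["labels"][0]
    value = record
    for part in expr.lstrip("$").split("."):
        m = re.fullmatch(r"(\w+)\[(\d+)\]", part)
        if m:  # array access like labels[0]
            value = value[m.group(1)][int(m.group(2))]
        else:  # plain nested field
            value = value[part]
    return value

record = {
    "log": "1234",
    "kubernetes": {"pod_name": "mypod-0", "labels": [{"app": "myapp"}]},
}
resolve_accessor(record, "$kubernetes.pod_name")   # -> "mypod-0"
resolve_accessor(record, "$kubernetes.labels[0]")  # -> {"app": "myapp"}
```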

Processing rules playground

The Calyptia Dashboard has a processing rules playground (opens in a new tab) you can use to test and troubleshoot processing rules, including custom Lua scripts.

For safety reasons, this playground environment is isolated from the internet and has no access to internal Calyptia resources. Any processing rules you test here won't affect your pipeline, clusters, or logging data.

Add processing rules to your pipeline

  1. Sign in to the Calyptia Dashboard (opens in a new tab).

  2. Navigate to Core Instances, then select the pipeline to which you'd like to add a new processing rule.

  3. Click Edit.

  4. Click the node in the middle of the configuration diagram.

  5. In the dialog that appears, select an option from the Telemetry type tab.

  6. Click Add new action to open the processing rules menu.

  7. Select a processing rule from the available list.

  8. Configure the available settings for that processing rule, and then click Apply.

  9. Optional: Repeat steps 5 through 8 to add additional processing rules. If you add multiple rules, you can drag them to change the order in which they run.

  10. Optional: Add test input and then click Run actions to preview the output of your processing rules.

  11. Click Apply processing rules to finalize your processing rules, and then click Save and deploy to save your pipeline settings.

Use the toggle next to a processing rule to enable or disable that rule.