Telemetry Pipeline concepts

Chronosphere Telemetry Pipeline includes several distinct components, utilities, and features that combine to form a unified product. This guide describes some of the most common concepts you'll encounter while using Telemetry Pipeline.

Pipeline

A pipeline, also known as a telemetry pipeline, is a data management tool for collecting, transforming, and routing telemetry data. Pipelines support all telemetry types, including logs, metrics, and traces.

Each pipeline must include at least one source plugin and one destination plugin. You can also add parsers and processing rules to each pipeline, but these are optional.

Pipelines rely on Kubernetes custom resource definitions (CRDs), but you don't need to directly interact with these CRDs to use Telemetry Pipeline. Instead, you can use the Telemetry Pipeline web interface or Pipeline CLI to configure your Core Operators and Core Instances, which make changes to those resources on your behalf.
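
Because pipelines are ordinary Kubernetes custom resources under the hood, you can inspect them with standard Kubernetes tooling if you ever need to. The following is a minimal sketch using the official Kubernetes Python client; the API group, version, and resource plural shown are placeholders, not the product's actual CRD names, so check the CRDs registered in your own cluster for the real values.

  # A minimal sketch of listing pipeline custom resources with the official
  # Kubernetes Python client. The group, version, and plural values below are
  # placeholders, not Telemetry Pipeline's actual CRD names.
  from kubernetes import client, config

  config.load_kube_config()  # use load_incluster_config() when running in-cluster
  api = client.CustomObjectsApi()

  pipelines = api.list_namespaced_custom_object(
      group="example.chronosphere.io",  # placeholder API group
      version="v1alpha1",               # placeholder version
      namespace="default",
      plural="pipelines",               # placeholder resource plural
  )
  for item in pipelines.get("items", []):
      print(item["metadata"]["name"])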

Plugin

A plugin is an extension that connects your pipeline to other tools or platforms. These plugins are located at the beginning and end of each pipeline, and create the openings through which your data flows.

Source plugin

A source plugin, also known as an input plugin, is an opening that lets telemetry data enter your pipeline. These plugins are located at the beginning of each pipeline.

Pipelines can include one or more source plugins, including plugins that correspond to different tools or platforms. This makes it possible to feed data from multiple sources into the same pipeline.

Destination plugin

A destination plugin, also known as an output plugin, is an opening that lets telemetry data exit your pipeline. These plugins are located at the end of each pipeline.

Pipelines can include one or more destination plugins, including plugins that correspond to different tools or platforms. This makes it possible to collect data within a single pipeline, then send that data to multiple destinations.
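
To make the fan-in and fan-out behavior concrete, here's a short conceptual sketch in Python. It isn't Telemetry Pipeline code and all names are illustrative; it only shows the shape of the data flow: any number of sources feed one pipeline, and each record is delivered to every destination.

  # Conceptual only: records from any number of sources flow through one
  # pipeline, and each record is handed to every configured destination.
  from typing import Callable

  Record = dict

  def run_pipeline(
      sources: list[list[Record]],
      destinations: list[Callable[[Record], None]],
  ) -> None:
      for source in sources:              # fan-in: multiple source plugins
          for record in source:
              for send in destinations:   # fan-out: multiple destination plugins
                  send(record)

  # Two sources feeding one pipeline that writes to two destinations.
  syslog = [{"msg": "disk full"}]
  http_in = [{"msg": "login ok"}]
  run_pipeline([syslog, http_in], [print, lambda record: None])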

Parser

A parser is a tool that turns unstructured data into structured data.

Parsers are applied at the source plugin level, which means that any data passing through an applicable source plugin is parsed before it enters your pipeline.
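
As an illustration of what parsing means here, the following Python sketch turns an unstructured log line into structured fields. It only demonstrates the concept; in Telemetry Pipeline, parsers are configured on source plugins rather than written by hand.

  # Conceptual only: a parser turns an unstructured line into structured fields.
  import re

  LOG_PATTERN = re.compile(r"(?P<time>\S+) (?P<level>[A-Z]+) (?P<message>.*)")

  def parse(line: str) -> dict | None:
      match = LOG_PATTERN.match(line)
      return match.groupdict() if match else None

  print(parse("2024-05-01T12:00:00Z ERROR connection refused"))
  # {'time': '2024-05-01T12:00:00Z', 'level': 'ERROR', 'message': 'connection refused'}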

Processing rule

A processing rule is a tool that transforms data as it passes through your telemetry pipeline. You can use multiple processing rules within the same pipeline to perform complex operations.

Chronosphere offers a variety of predefined processing rules with configurable settings, each designed to perform a particular action. You can also create your own processing rules by writing custom Lua scripts.

Any active processing rules are applied to data after it enters your pipeline through a source plugin, but before it leaves your pipeline through a destination plugin.
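
The following Python sketch shows the general shape of that behavior: a record passes through an ordered chain of transformations between the source and the destination. It's conceptual only; in the product, predefined rules are configured through settings and custom rules are written in Lua, not Python.

  # Conceptual only: processing rules compose as an ordered chain of transforms.
  def redact_password(record: dict) -> dict:
      if "password" in record:
          record["password"] = "***"
      return record

  def add_env_tag(record: dict) -> dict:
      record["env"] = "production"
      return record

  RULES = [redact_password, add_env_tag]

  def process(record: dict) -> dict:
      for rule in RULES:  # runs after the source plugin, before any destination
          record = rule(record)
      return record

  print(process({"user": "ada", "password": "hunter2"}))
  # {'user': 'ada', 'password': '***', 'env': 'production'}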

Core Operators and Core Instances

In Chronosphere Telemetry Pipeline, pipelines are self-contained Kubernetes entities. This is true of all pipelines, regardless of where they're deployed.

Pipelines deployed within a Kubernetes cluster can take advantage of built-in Kubernetes features. For pipelines in a Linux environment, Telemetry Pipeline first creates a K3s cluster in that environment, then installs components and resources within that cluster.

The underlying component that creates and manages each Telemetry Pipeline installation is called a Core Operator. A Core Operator oversees one or more Core Instances, which are components that group pipelines and sync their status with the Telemetry Pipeline backend. You can interact with these components to configure pipelines' behavior.

For a diagram that outlines how these resources interact, see Telemetry Pipeline architecture.

Core Operator

A Core Operator is a Kubernetes operator for managing Telemetry Pipeline resources. After you install a Core Operator, it registers itself with the Kubernetes API, then watches for changes from any Core Instances in your Kubernetes or K3s cluster and creates or modifies resources as necessary. Each Core Operator can manage multiple Core Instances across any Kubernetes namespace.

You must install a Core Operator in each Kubernetes cluster or Linux environment where you want to use Telemetry Pipeline. If you have multiple clusters or environments, you'll need to install multiple Core Operators.

Core Instance

A Core Instance is a functional grouping of pipelines. Each Core Instance provides you with the tools to manage its associated pipelines, and each Core Instance is managed by its corresponding Core Operator. When you create a Core Instance, Telemetry Pipeline adds it to your Kubernetes or K3s cluster within a specified namespace.

You must create at least one Core Instance for each Core Operator you install. A single Core Operator can also manage more than one Core Instance.

For every pipeline you create, that pipeline's associated Core Instance performs the following actions, sketched in the example after this list:

  • Syncs data about that pipeline with the Telemetry Pipeline backend.
  • Acknowledges any changes you make to that pipeline.
  • Prompts the corresponding Core Operator to create and update the necessary resources for that pipeline, including Deployments, Services, and Secrets.
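
A heavily simplified Python sketch of that loop follows. Every name in it is hypothetical; the real components are Kubernetes controllers, and this only illustrates the division of labor described above.

  # Hypothetical stand-ins that mirror the three responsibilities listed above.
  from dataclasses import dataclass, field

  @dataclass
  class FakeBackend:
      pending: dict = field(default_factory=dict)

      def report_status(self, name: str) -> None:
          print(f"synced status for {name}")   # 1. sync with the backend

      def fetch_change(self, name: str) -> dict | None:
          return self.pending.pop(name, None)  # 2. acknowledge user edits

  @dataclass
  class FakeOperator:
      def reconcile(self, name: str, change: dict) -> None:
          # 3. create or update Deployments, Services, Secrets, and so on
          print(f"reconciling {name}: {change}")

  def sync_once(pipelines: list[str], backend: FakeBackend, operator: FakeOperator) -> None:
      for name in pipelines:
          backend.report_status(name)
          change = backend.fetch_change(name)
          if change:
              operator.reconcile(name, change)

  backend = FakeBackend(pending={"logs-pipeline": {"replicas": 2}})
  sync_once(["logs-pipeline"], backend, FakeOperator())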

Fleet

A fleet is a collection of agents governed by a shared set of configuration settings. This grouping makes it possible to control a large number of distributed agents from a single, centralized location.

Fleets are a feature of Chronosphere Telemetry Pipeline, but aren't pipelines themselves. Fleets and their constituent agents also follow a unique installation process that doesn't involve Core Operators or Core Instances.

Agent

An agent is a standalone tool for collecting telemetry data. These agents are functionally similar to pipelines, but they're not identical, and each tool has its own trade-offs. It's also possible to send data from an agent to a pipeline.
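
As a rough illustration of that last point, the sketch below POSTs a record to a pipeline over HTTP. The endpoint URL is a placeholder, and the example assumes the pipeline exposes an HTTP source plugin; in practice, agents ship data through their configured output plugins rather than hand-written code like this.

  # Conceptual only: push one record to a pipeline's HTTP source plugin.
  # The URL is a placeholder for wherever your pipeline's endpoint is exposed.
  import json
  import urllib.request

  record = {"log": "disk usage at 91%", "host": "web-01"}
  req = urllib.request.Request(
      "http://pipeline.example.internal:9880/logs",  # placeholder endpoint
      data=json.dumps(record).encode("utf-8"),
      headers={"Content-Type": "application/json"},
      method="POST",
  )
  with urllib.request.urlopen(req) as resp:
      print(resp.status)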

Agents can take advantage of parsers, but not processing rules. Additionally, agents use a different set of plugins than pipelines do.