Scrape configuration using Kubernetes

When using Kubernetes discovery, the Collector determines the options for scraping endpoints based on this order:

  1. Default global scrape configuration
  2. Global scrape configuration
  3. Kubernetes annotations
  4. Job configuration

Any settings made by an earlier configuration are overwritten by a later configuration. For example, the default scrape interval set in the default global scrape configuration is ten seconds (10s). You can override this interval by setting a global scrape configuration, adding an annotation to a Kubernetes object, or adding a job configuration.
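For example, a minimal sketch of this layering, where the pod annotation wins for that pod:

    # Global scrape configuration: overrides the 10s default for all jobs
    scrape:
      defaults:
        scrapeInterval: 30s

    # Pod annotation: overrides the global 30s for this pod only
    prometheus.io/collectionInterval: 15s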

Global scrape configuration

The following default scrape configuration settings apply globally to all jobs:

scrape:
  defaults:
    metricsPath: "/metrics"
    scheme: "http"
    scrapeInterval: 10s
    scrapeTimeout: 10s
    honorLabels: false
    honorTimestamps: true
    relabels: <nil>
    metricRelabels: <nil>

You can override the global scrape configuration to set options for your instance. If you don't specify a field, it uses the default values.
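For example, this sketch raises only the interval and timeout; unspecified fields such as metricsPath and scheme keep their default values:

    scrape:
      defaults:
        scrapeInterval: 30s
        scrapeTimeout: 15s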

The relabels and metricRelabels parameters accept Prometheus relabel_config entries. This configuration allows advanced modifications to any target and its labels before ingesting metrics.

Kubernetes annotations

You can use Kubernetes annotations to override any global defaults. The Collector supports the following Prometheus annotations to enable scraping for a pod. By default, each annotation uses the prometheus.io prefix, as in prometheus.io/scrape. You can change the annotation prefix to a different value.

  • /scrape: Determines whether to enable scraping. Excludes scraping the pod if set to false.
  • /port: Port to scrape on the pod.
  • /path: Metrics path for the pod.
  • /params: HTTP URL parameters.
  • /scheme: Connection format, either http or https.
  • /job: Used to override the value of the job label.
  • /collectionInterval: How often to scrape endpoints. Default: 10s.
  • /collectionTimeout: Timeout for a scrape operation.
  • /serviceAccountBearerToken: Token value if using service account authentication.
  • /httpProxyURL: Proxy URL for connections to this pod.
  • /tlsServerName: Host name when using TLS authentication.
  • /tlsInsecureSkipVerify: Determines whether to verify the server's certificate. Set to true to skip certificate verification.

The following annotations can also be used on the scraped pod. Each references an environment variable or file that must exist on the Collector pod.

  • /httpBasicAuthUsernameEnvVar: Username environment variable when using basic authentication.
  • /httpBasicAuthPasswordEnvVar: Password environment variable when using basic authentication.
  • /httpBasicAuthPasswordFile: Path to file containing a password when using basic authentication.
  • /httpBearerTokenEnvVar: Token environment variable when using HTTP bearer token authentication.
  • /httpBearerTokenFile: Path to file containing a bearer token when using HTTP bearer token authentication.
  • /tlsCAFile: Path to CA certificate when using TLS authentication.
  • /tlsCertFile: Path to personal certificate when using TLS authentication.
  • /tlsKeyFile: Path to private key when using TLS authentication.

For example, suppose the Kubernetes pod to be scraped includes the following annotations:

prometheus.io/scrape=true
prometheus.io/port=8090
prometheus.io/httpBasicAuthUsernameEnvVar=MY_USERNAME
prometheus.io/httpBasicAuthPasswordEnvVar=MY_PASSWORD

The Collector pod needs these environment variables defined:

MY_USERNAME=admin
MY_PASSWORD=mypassword
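For example, the Collector's container spec might define the variables as follows. This is a sketch: the Secret name scrape-credentials is hypothetical, and your manifest layout may differ:

    env:
      - name: MY_USERNAME
        value: admin
      - name: MY_PASSWORD
        valueFrom:
          secretKeyRef:
            name: scrape-credentials   # hypothetical Secret holding the password
            key: password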

Change annotation prefix

You can change the annotation prefix under the kubernetes > processor > annotationsPrefix YAML collection by setting the environment variable KUBERNETES_PROCESSOR_ANNOTATIONS_PREFIX. The default is prometheus.io/.

kubernetes:
  ...
  # processor defines configuration for processing pods discovered on Kubernetes.
  processor:
    # annotationsPrefix is the prefix for annotations that the Collector uses to scrape discovered pods.
    annotationsPrefix: ${KUBERNETES_PROCESSOR_ANNOTATIONS_PREFIX:"prometheus.io/"}
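For example, with a hypothetical prefix of example.com/, the Collector instead looks for annotations such as:

    example.com/scrape=true
    example.com/port=8090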

If you modify a Collector manifest, you must update it in the cluster and restart the Collector.

Scrape multiple ports

The prometheus.io/port annotation can take a comma-separated list to indicate the ports to scrape. For example, prometheus.io/port: 1234,5678.

All endpoints must expose metrics on the same path. For example, both ports 1234 and 5678 must expose Prometheus metrics on the /metrics path.

If specifying multiple ports, the pods must expose all ports. The following is an example of a container ports configuration in a pod manifest exposing both 1234 and 5678:

ports:
- name: metrics
  containerPort: 1234
- name: other-metrics
  containerPort: 5678
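The matching pod metadata then requests both ports with the comma-separated annotation (a sketch):

    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "1234,5678"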

Define jobs configuration

When using annotations for scraping, you can set the jobs configuration to override any of the defaults set in the global scrape configuration.

Here are example values for the jobs configuration:

jobs:
  - name: "foo"
    options:
      metricsPath: PATH
      params: STRING
      scheme: SCHEME
      scrapeInterval: SCRAPE_INTERVAL
      scrapeTimeout: SCRAPE_TIMEOUT
      honorLabels: BOOLEAN
      honorTimestamps: BOOLEAN
      relabels: RELABEL_CONFIG
      metricRelabels: RELABEL_CONFIG

The name value must be unique across all scrape configurations, and must match the job name set in the prometheus.io/job annotation.
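For example, a pod annotated with prometheus.io/job=foo is matched by a jobs entry named foo (a sketch; the overridden option is illustrative):

    jobs:
      - name: "foo"
        options:
          scrapeInterval: 30s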

For example, to change the scrapeInterval to 1m and create a relabeling rule that changes the name of the label code to status_code for the api job, add the following configuration to the config.yml field of the chronocollector-config ConfigMap:

jobs:
  - name: api
    options:
      scrapeInterval: 1m
      relabels:
        - action: replace
          regex: (.*)
          replacement: $1
          sourceLabels:
            - code
          targetLabel: status_code

If you modify a Collector manifest, you must update it in the cluster and restart the Collector.

Differences between relabels and metricRelabels

A Prometheus relabel configuration lets you select which targets you want scraped, and what the target labels are. Using relabel rewrites the label set of a target before it's scraped.

Chronosphere Collector applies metricRelabels after the scrape, but before ingesting the data. If there are expensive metrics that you want to drop, or labels coming from the scrape itself that you want to manipulate, then use metricRelabels.

In this example, the relabels rule renames the label code to status_code for the api job before scraping the metrics:

jobs:
  - name: api
    options:
      scrapeInterval: 1m
      relabels:
        - action: replace
          regex: (.*)
          replacement: $1
          sourceLabels:
            - code
          targetLabel: status_code

The following metricRelabels rule matches cadvisor job metrics where the pod label is promremotebench-0, and writes the job value to the database label:

jobs:
  - name: cadvisor
    options:
      metricRelabels:
        - action: replace
          regex: promremotebench-0;(.*)
          replacement: $1
          sourceLabels:
            - pod
            - job
          targetLabel: database
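The matching behavior of the rule above can be sketched in Python. This is an approximation of Prometheus relabel semantics (the sourceLabels values joined with a ; separator, a fully anchored regular expression, and group references expanded into targetLabel), not the Collector's implementation:

```python
import re

def relabel_replace(labels, source_labels, regex, replacement, target_label, separator=";"):
    """Approximate the Prometheus 'replace' action: join source label
    values, match the anchored regex, and set target_label on success."""
    value = separator.join(labels.get(name, "") for name in source_labels)
    match = re.fullmatch(regex, value)  # Prometheus regexes are fully anchored
    if match is None:
        return labels  # no match: labels are left unchanged
    result = dict(labels)
    # Translate $1-style references to re's \1 syntax before expanding
    result[target_label] = match.expand(replacement.replace("$1", r"\1"))
    return result

labels = {"pod": "promremotebench-0", "job": "cadvisor"}
out = relabel_replace(labels, ["pod", "job"], r"promremotebench-0;(.*)", "$1", "database")
print(out["database"])  # cadvisor
```

The concatenation step is why the regex begins with the literal pod name followed by the ; separator: the metric only matches when pod is promremotebench-0.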

Environment variable expansions and Prometheus relabel rule regular expression capture group references can use the same syntax. For example, ${1} is valid in both contexts.

If your relabel configuration uses capture group references in the ${1} format, escape them by adding an extra $ character, such as $${1}, so the Collector doesn't expand them as environment variables.
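For example, a sketch based on the earlier api job rule: the replacement is written as $${1} so that, after environment variable escaping, the relabel rule receives a literal ${1} capture group reference:

    relabels:
      - action: replace
        regex: (.*)
        replacement: $${1}
        sourceLabels:
          - code
        targetLabel: status_code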

metricRelabels actions

The following actions are available for metricRelabels:

  • replace: Match a regular expression against the concatenated sourceLabels. Then, set targetLabel to replacement, with match group references (such as ${1} or ${2}) in the replacement substituted by their value. If the regular expression doesn't match, no replacement occurs.
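    For example, this sketch mirrors the earlier api job rule, copying the value of the code label into status_code:

    jobs:
        - name: api
          options:
            metricRelabels:
              - action: replace
                regex: (.*)
                replacement: $1
                sourceLabels: [code]
                targetLabel: status_code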

  • keep: Drop targets for which a regular expression doesn't match the concatenated sourceLabels.

    jobs:
        - name: kubelet
          options:
            metricRelabels:
              - action: keep
                regex: (?i)(kubelet_volume_stats_available_bytes|kubelet_volume_stats_capacity_bytes)
                sourceLabels: [__name__]
  • drop: Drop targets for which the regex matches the concatenated sourceLabels.

    This example uses relabels to drop metrics based on their __name__ for the cadvisor job:

    jobs:
        - name: cadvisor
          options:
            relabels:
              - action: drop
                regex: (.*)
                sourceLabels: [__name__]
  • labelmap: Match a regular expression against all label names. Then, copy the values of the matching labels to the label names given by the replacement with match group references (such as ${1} or ${2}) in the replacement substituted by their value.

    The following example uses labelmap to copy each label beginning with __meta_kubernetes_service_label_ to a new label with that prefix removed:

    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)

    An example of the result is changing __meta_kubernetes_service_label_app='api' to app='api'.

  • labeldrop: Match a regular expression against all label names and remove any label that matches from the set of labels.

    This example removes any label whose name matches container_label_com_amazonaws_ecs_task_arn:

    metricRelabels:
      - regex: 'container_label_com_amazonaws_ecs_task_arn'
        action: labeldrop
  • labelkeep: Match a regular expression against all label names and remove any label that doesn't match from the set of labels.

    This example code drops all label names that don't match job:

    metricRelabels:
      - regex: 'job'
        action: labelkeep