- ServiceMonitors (each node)
- Kubernetes annotations (each node)
- Prometheus service discovery (per cluster)
ServiceMonitors or Kubernetes annotations (or a combination of both) are
recommended for most deployments.
Use push-based collection mechanisms for use cases where jobs can’t be scraped
automatically, such as AWS Lambda, Google Cloud Functions, or ephemeral batch jobs.
## ServiceMonitors
ServiceMonitors are a custom resource definition (CRD) you can use to define scrape
configurations and options in a separate Kubernetes resource.
Discovery is scoped to the targets on the local node by default, which requires you
to deploy the Collector as a DaemonSet
for this method of service discovery.
### Prerequisites
Run the following command to install the `ServiceMonitor` CRD
from the full Prometheus Operator, using the file in the kube-prometheus-stack
Helm chart:
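For example (a sketch only; the path is a placeholder for the ServiceMonitor CRD file shipped in the kube-prometheus-stack Helm chart, so substitute the actual location for your chart version):

```shell
kubectl apply -f <path-to-servicemonitor-crd.yaml>
```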
Chronosphere supports only fields in version 0.44.1 of the Prometheus Operator.
### Enable ServiceMonitor discovery
To enable ServiceMonitor discovery in the Collector, make the following configuration changes:

- Add the following options after the `ClusterRole` resource in the manifest under `rules` (see the RBAC sketch after this list).
- Enable the ServiceMonitors feature of the Collector by setting the following keys to `true` in the manifest `ConfigMap` under the `discovery > kubernetes` key (see the `ConfigMap` sketch after this list):
  - `serviceMonitorsEnabled`: Indicates whether to use ServiceMonitors to generate job configurations.
  - `endpointsDiscoveryEnabled`: Determines whether to discover Endpoints. Requires `serviceMonitorsEnabled` to be set to `true`.
  - `useEndpointSlices`: Use EndpointSlices instead of Endpoints. Requires `serviceMonitorsEnabled` and `endpointsDiscoveryEnabled` to be set to `true`. EndpointSlices use fewer resources than Endpoints, and are available with Chronosphere Collector v0.85.0 or later and Kubernetes v1.21 or later. Chronosphere Collector v0.104.0 and later exclusively use EndpointSlices and ignore the `useEndpointSlices` setting.
  - `podMatchingStrategy`: Determines how to use ServiceMonitors and annotations when discovering targets. Accepts the following values:
    - `all`: Allows any and all scrape jobs to be registered for a single pod.
    - `annotations_first`: Matches annotations first. If no matches return, then other matching can occur.
    - `service_monitors_first`: Matches ServiceMonitors first. If no matches return, then other matching can occur.
    - `service_monitors_only`: Matches ServiceMonitors only.
  - `additionalPodMatching`: An array that configures the discovery of Pods in addition to those on the same Node as the Chronosphere Collector Pod, such as when a set of other Pods is divided among the set of Chronosphere Collector DaemonSet Pods. For example, you can configure this to discover and scrape Pods that represent workloads within a Virtual Kubelet whose host Pod is on the same Node as the Collector Pod. Each item in the `additionalPodMatching` array must specify a `labelSelector` or `fieldSelector` to filter which Pods to watch. Additional Pod matching requires Chronosphere Collector v0.113.0 or later.
    - `labelSelector`: Matches Pods where a label value matches the name of the Node on which the Collector Pod DaemonSet instance is running. Define the selector value to match additional Pods to be selected.
    - `activationRule`: Adds an activation rule to `additionalPodMatching` to dynamically start and stop Kubernetes API watches based on the presence of Pods running on the same Node that match the `podLabelSelector`. If a given Collector instance identifies a Pod on its respective Node that matches the activation rule's label selector, that Collector enables the additional watch stream. If that Pod ceases to exist, the Collector stops the additional watch stream.
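The following sketches illustrate both changes. The RBAC rules are an assumption about what ServiceMonitor discovery typically needs (read access to ServiceMonitors and to the resources being discovered); copy the exact rules from your generated manifest rather than from this example.

```yaml
# Illustrative additions under the ClusterRole's rules key; verify against
# your generated Collector manifest.
- apiGroups: ["monitoring.coreos.com"]
  resources: ["servicemonitors"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"]
  verbs: ["get", "list", "watch"]
```

And a minimal sketch of the `ConfigMap` keys, assuming they nest directly under `discovery > kubernetes` as described above. The chosen `podMatchingStrategy` value is only an example, and `additionalPodMatching` is omitted because its nested shape depends on your Collector version:

```yaml
discovery:
  kubernetes:
    serviceMonitorsEnabled: true
    endpointsDiscoveryEnabled: true
    useEndpointSlices: true        # ignored by Collector v0.104.0 and later
    podMatchingStrategy: service_monitors_first
```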
### Pod-based ServiceMonitor discovery
If you use a version of Kubernetes that doesn't support endpoint slices, you can set `endpointsDiscoveryEnabled` to `false` to run the Collector in a mode that doesn't discover Kubernetes endpoint slices or service resources.
In this mode, the Collector can still discover scrape targets using ServiceMonitors under specific circumstances, depending on the Kubernetes resource configuration. The Collector uses the Pod's labels in place of the Service's labels: if the Pod's labels match the ServiceMonitor's selector, the Collector discovers the target through a ServiceMonitor that uses `targetPort` (container port) to indicate the port to scrape.
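For example, a ServiceMonitor shaped like the following (the name, label, and port are illustrative) can still resolve targets in this mode, because its selector matches labels carried by the Pods themselves and it identifies the scrape port with `targetPort`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app            # illustrative name
spec:
  selector:
    matchLabels:
      app: example-app         # these labels must be present on the Pods
  endpoints:
    - targetPort: 8080         # container port to scrape
      path: /metrics
```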
### Run as a DaemonSet with ServiceMonitors
If you want to run the Collector as a DaemonSet and scrape `kube-state-metrics`
through a Collector running as a Deployment, you need to update the manifest for both
Collector instances.
In your DaemonSet, add the `serviceMonitor > serviceMonitorSelector` key to your
manifest and define the following `matchExpressions` to ensure that your DaemonSet
only matches on ServiceMonitors that don't contain `kube-state-metrics`:
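A sketch of the DaemonSet selector, assuming the `matchExpressions` entries follow the Kubernetes label-selector shape (`key`, `operator`, `values`); the label key shown is only an example of a label that identifies the `kube-state-metrics` ServiceMonitor:

```yaml
serviceMonitor:
  serviceMonitorSelector:
    matchExpressions:
      - key: app.kubernetes.io/name   # illustrative label key
        operator: NotIn
        values:
          - kube-state-metrics
```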
In your Deployment, add the same key, but set the `operator` value of the
`matchExpressions` attribute to `In`. This setting ensures that your Deployment
only matches on ServiceMonitors that contain `kube-state-metrics`:
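And the corresponding sketch for the Deployment, with the same assumptions, using the `In` operator:

```yaml
serviceMonitor:
  serviceMonitorSelector:
    matchExpressions:
      - key: app.kubernetes.io/name   # illustrative label key
        operator: In
        values:
          - kube-state-metrics
```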
### Match specific ServiceMonitors
By default, the Collector ingests metrics from all `ServiceMonitor` sources. To match
specific instances, use a series of AND match rules under the
`serviceMonitor > serviceMonitorSelector` key and set `matchAll` under the
`serviceMonitorSelector` key to `false`.
- `matchLabelsRegexp`: Labels and a regular expression to match a value.
- `matchLabels`: Labels and a matching value.
- `matchExpressions`: Depending on the operator set, labels that exist or don't exist, or have or don't have specific values:
  - To match ServiceMonitors that have the `example` label with values `a` or `b`, use the `In` operator.
  - To match ServiceMonitors that have the `example` label without values `a` or `b`, use the `NotIn` operator. The `NotIn` operator also matches any ServiceMonitors without the `example` label present.
  - To match ServiceMonitors that have the `example` label with any value, use the `Exists` operator.
  - To match ServiceMonitors that don't have the `example` label, use the `DoesNotExist` operator.

The sketch after this list shows an example of each rule type.
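A minimal sketch of a selector that uses each rule type, assuming the `matchExpressions` entries follow the Kubernetes label-selector shape (`key`, `operator`, `values`) and that `matchLabelsRegexp` maps a label name to a regular expression; confirm the exact field names against your Collector version's reference:

```yaml
serviceMonitor:
  serviceMonitorSelector:
    matchAll: false
    matchLabels:
      example: a                 # label "example" must equal "a"
    matchLabelsRegexp:
      example: "^(a|b)$"         # label "example" must match this expression
    matchExpressions:
      - key: example
        operator: In             # or NotIn, Exists, DoesNotExist
        values: ["a", "b"]
```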
### Match endpoints without pods using ServiceMonitors
The default Collector configuration isn't suitable if you want to discover endpoints but lack access to Pod information. For example, if you want to:

- Monitor the Kubernetes API server, which doesn't run on the same node as Kubernetes workloads.
- Monitor endpoints that can be running anywhere in the cluster, but without using a Collector running as a DaemonSet.
- Discover and scrape `kube-state-metrics`, which listens to the Kubernetes API server and generates metrics about deployments, nodes, and pods.
If you're monitoring endpoints but don't have access to Pod information, the
ServiceMonitor can't use the `targetPort` attribute to target the endpoint and
must instead use the `port` attribute. You must also set the `allowSkipPodInfo`
attribute to `true`.
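A sketch of both pieces follows. The placement of `allowSkipPodInfo` under the Collector's `serviceMonitor` key is an assumption to verify against your Collector version, and the ServiceMonitor shown is illustrative; what matters is that it addresses the scrape target by named `port` rather than `targetPort`:

```yaml
# Collector ConfigMap (placement of allowSkipPodInfo is assumed):
serviceMonitor:
  allowSkipPodInfo: true
```

```yaml
# ServiceMonitor that targets a named Service port, so no Pod info is needed:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kube-apiserver         # illustrative name
spec:
  selector:
    matchLabels:
      component: apiserver
  endpoints:
    - port: https              # named Service port
      path: /metrics
      scheme: https
```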
## Kubernetes annotations
Discovery is scoped to the targets on the local node by default, which requires you
to deploy the Collector as a DaemonSet
for this method of service discovery.
To use annotation-based discovery, add scrape annotations to each Pod in the cluster.
For example, the following annotations configure the Collector to scrape the Pod's
`/metrics` endpoint on port 9100.
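A minimal sketch of these annotations on a Pod template, assuming the conventional `prometheus.io/scrape`, `prometheus.io/port`, and `prometheus.io/path` keys; consult the Collector reference for the complete set of supported annotations:

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9100"
    prometheus.io/path: "/metrics"
```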
The annotation prefix, `prometheus.io/`, is configurable from the
`kubernetes > processor` section of the Collector `ConfigMap`.
After any changes, send the updated manifest to the cluster with the following
command:
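For example, assuming the manifest is saved locally (the filename is a placeholder):

```shell
kubectl apply -f chronocollector.yaml
```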
If you modify a Collector manifest, you must
update it in the cluster and restart the Collector.
## Prometheus service discovery
If using Prometheus service discovery within Kubernetes,
deploy a single Collector as a Kubernetes Deployment
per cluster. This prevents every Collector instance from duplicating scrapes of all
endpoints defined in the Prometheus service discovery configuration.
To use this method, set `discovery.prometheus.enabled` to `true`
in the Collector config, and provide the list of scrape configs in the
`discovery.prometheus.scrape_configs` section. The following example uses
`kubernetes_sd_config`:
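A minimal sketch, assuming `scrape_configs` accepts standard Prometheus scrape configuration blocks; the job name and relabeling rule are illustrative:

```yaml
discovery:
  prometheus:
    enabled: true
    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # Keep only Pods that opt in via the prometheus.io/scrape annotation.
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"
```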
### Set the Collector to scrape its own metrics
For the Collector to scrape its own metrics, add another job to the `discovery.prometheus.scrape_configs` key:
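A sketch of such a job, assuming the Collector exposes its own metrics on `localhost:3030` (substitute the listen address configured for your deployment):

```yaml
discovery:
  prometheus:
    enabled: true
    scrape_configs:
      # ...existing scrape configs...
      - job_name: chronosphere-collector
        static_configs:
          - targets: ["localhost:3030"]   # assumed default metrics address
```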