Monitor kubelet or cAdvisor metrics
If running in Kubernetes, you can configure the Collector to scrape kubelet or cAdvisor metrics by setting the kubeletMetricsEnabled or cadvisorMetricsEnabled flag to true under the kubeletMonitoring YAML collection.
For example:
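The following sketch is illustrative rather than a complete configuration; it nests the settings described below under the kubeletMonitoring key, turns on both scrape flags, and otherwise uses the documented defaults:

```yaml
kubeletMonitoring:
  port: 10250
  bearerTokenFile: "/var/run/secrets/kubernetes.io/serviceaccount/token"
  kubeletMetricsEnabled: true
  cadvisorMetricsEnabled: true
  probesMetricsEnabled: false
```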
The kubeletMonitoring collection supports the following settings:

- port: The port that the kubelet is running on. Default: 10250.
- bearerTokenFile: The path to the file containing the Collector’s service account token. Default: "/var/run/secrets/kubernetes.io/serviceaccount/token".
- kubeletMetricsEnabled: Enables scraping kubelet metrics. Default: false.
- cadvisorMetricsEnabled: Enables scraping cAdvisor metrics. Default: false.
- probesMetricsEnabled: Enables collecting metrics on the status of liveness, readiness, and startup kubelet probes for Kubernetes containers. Default: false.
- labelsToAugment: Lists the metadata labels from pod labels that the Collector adds to metrics.
- annotationsToAugment: Lists the metadata labels from pod annotations that the Collector adds to metrics.
Add metadata labels from pod labels
By default, container-level metrics don’t include metadata labels like service or
app, which searches can include when querying for these metrics. To automatically
add these labels from pod labels, use the labelsToAugment flag to list the labels
that the Collector adds to the metrics.
For example, to add the app label to the container-level metrics for a node-exporter
DaemonSet deployment, use the following configuration under the kubeletMonitoring
key:
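A minimal sketch, assuming labelsToAugment accepts a list of pod label names; the two scrape flags are included only for context, and the rest of the kubeletMonitoring collection is omitted:

```yaml
kubeletMonitoring:
  kubeletMetricsEnabled: true
  cadvisorMetricsEnabled: true
  labelsToAugment:
    - app
```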
app="node-exporter" to these metrics, based on the following example
node-exporter manifest:
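For illustration, such a manifest might look like the following sketch; the image tag and other details are placeholders rather than values from this page:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter   # pod label the Collector copies to container-level metrics
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter:v1.7.0   # placeholder image tag
```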
Add metadata labels from pod annotations
By default, container-level metrics don’t include metadata labels, which searches can include when querying for these metrics. To automatically add these labels from pod annotations, use the annotationsToAugment flag to list the labels the Collector
adds to the metrics.
For example, to add the app_kubernetes_io_component label to the container-level
metrics for a node-exporter DaemonSet deployment, use the following configuration
under the kubeletMonitoring key:
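A minimal sketch, assuming annotationsToAugment accepts a list of annotation names. Whether each entry is the raw annotation (app.kubernetes.io/component) or its sanitized form (app_kubernetes_io_component) isn’t stated on this page, so the raw form below is an assumption:

```yaml
kubeletMonitoring:
  kubeletMetricsEnabled: true
  cadvisorMetricsEnabled: true
  annotationsToAugment:
    - app.kubernetes.io/component
```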
app_kubernetes_io_component="infrastructure" to these metrics, assuming
the following example node-exporter manifest:
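For illustration, the pod template of such a manifest might carry the annotation like this (trimmed sketch; other DaemonSet fields omitted):

```yaml
spec:
  template:
    metadata:
      labels:
        app: node-exporter
      annotations:
        app.kubernetes.io/component: infrastructure   # copied to metrics as app_kubernetes_io_component="infrastructure"
```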
Map Kubernetes labels to Prometheus labels
Requires Chronosphere Collector version 0.93.0 or later.

The Collector lets you specify pod labels and annotations you want to keep as Prometheus labels. This feature applies to pods only. For example, you can convert all pod labels called my_label and all pod annotations called my.pod.annotation into Prometheus labels for the metrics scraped from discovered pods. This is equivalent to a Prometheus labelmap rule, but the Collector also sanitizes the label names and values.
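As a rough illustration of the behavior, the equivalent Prometheus relabeling (inside a scrape config that uses Kubernetes service discovery) would be labelmap rules against the pod meta labels; note that Prometheus itself sanitizes the annotation name my.pod.annotation to my_pod_annotation:

```yaml
relabel_configs:
  # Copy the pod label my_label into a Prometheus label of the same name.
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(my_label)
  # Copy the pod annotation my.pod.annotation (sanitized to my_pod_annotation).
  - action: labelmap
    regex: __meta_kubernetes_pod_annotation_(my_pod_annotation)
```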
Discover kube-system endpoints
Requires Chronosphere Collector version 0.85.0 or later. Requires Kubernetes version 1.19.0 or later for your Kubernetes cluster.

To discover endpoints in the kube-system namespace, set the
kubeSystemEndpointsDiscoveryEnabled flag to true. Because kube-system has many
constantly changing endpoints that might cause unnecessary load on the Collector, this
discovery is disabled by default.
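A minimal sketch of enabling this flag; placing it under the discovery.kubernetes collection is an assumption based on the related endpointsDiscoveryEnabled flag described later on this page:

```yaml
discovery:
  kubernetes:
    kubeSystemEndpointsDiscoveryEnabled: true
```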
Using EndpointSlices significantly reduces the amount of load on the Kubernetes API
server.
If you modify a Collector manifest, you must
update it in the cluster and restart the Collector.
Discover and scrape kube-state-metrics
You can use ServiceMonitors to scrape kube-state-metrics, which generates metrics
that track the health of deployments, nodes, and pods in a Kubernetes cluster.
Monitoring these metrics can help to ensure the health of your cluster because the
Collector expects to continually receive kube-state-metrics. If the Collector can’t
scrape these metrics, it’s likely your Kubernetes cluster is experiencing issues you
need to resolve.
Monitoring kube-state-metrics with a DaemonSet Collector is manageable for smaller
clusters, but can lead to out of memory (OOM) errors as the cluster scales.
Chronosphere recommends running the Collector as a sidecar to take advantage of
staleness markers. The following steps assume that:
- You’re running a separate Collector as a Deployment to monitor kube-state-metrics.
- You’ve already defined a Kubernetes Service and ServiceMonitor for kube-state-metrics.

With the ServiceMonitors CRD in place, complete the following steps to discover kube-state-metrics:
1. Download this manifest.
2. In the data section, replace the values for address and api-token with your Base64-encoded API token (see the sketch after these steps).
3. Apply the manifest.
4. Confirm the Deployment is started and running, and view the logs of the pod:
   - Use kubectl get pods to list the pods.
   - In the output, identify the pod to examine by its NAME column value. For example, a pod name looks like chronocollector-jtgfw.
   - Use kubectl logs to review the pod’s logs (example commands after these steps).
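As an illustration of steps 2 through 4, the following sketches show the shape of the data section and the kubectl commands involved. The manifest filename is a placeholder, and the value shown for address (an ingest address) is an assumption; the address and api-token keys and the example pod name chronocollector-jtgfw come from this page:

```yaml
# Sketch of the data section in the downloaded manifest.
data:
  address: <Base64-encoded ingest address>
  api-token: <Base64-encoded API token>
```

```shell
# Apply the edited manifest (filename is a placeholder).
kubectl apply -f chronocollector-deployment.yaml

# List pods and find the Collector pod by its NAME column value.
kubectl get pods

# Review the logs of the pod identified above, for example chronocollector-jtgfw.
kubectl logs chronocollector-jtgfw
```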
Ingest Kubernetes API server metrics
The Kubernetes API Server provides REST operations and a frontend to a cluster’s shared state through which all other components interact. Unlike most other metrics emitted from a cluster, Kubernetes doesn’t expose API Server metrics by using a pod, but instead exposes metrics directly from an endpoint in the API Server. To ingest these metrics through traditional service discovery methods, you must discover and scrape the endpoints directly. The Collector supports using ServiceMonitors or job service discovery.
Discover API Server metrics with ServiceMonitors
Discover metrics for a managed Kubernetes cluster
To use ServiceMonitors to discover and scrape API Server metrics from a managed Kubernetes cluster, such as Amazon Elastic Kubernetes Service (EKS) or Google Kubernetes Engine (GKE), enable both the allowSkipPodInfo flag under the top-level serviceMonitor key and the
endpointsDiscoveryEnabled flag under the discovery.kubernetes YAML collection
in the Collector configuration.
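A minimal sketch combining the two flags named above; any other serviceMonitor and discovery settings your deployment needs are omitted:

```yaml
serviceMonitor:
  allowSkipPodInfo: true
discovery:
  kubernetes:
    endpointsDiscoveryEnabled: true
```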
Discover metrics for a self-managed Kubernetes cluster
To use ServiceMonitors to discover and scrape API server metrics from a self-managed Kubernetes cluster, such as k0ps, enable theendpointsDiscoveryEnabled flag under the discovery.kubernetes YAML
collection in the Collector configuration:
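A minimal sketch of that flag; other discovery.kubernetes settings are omitted:

```yaml
discovery:
  kubernetes:
    endpointsDiscoveryEnabled: true
```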
In this configuration, you can deploy the Collector as a DaemonSet, which installs
the Collector on the master nodes that run the Kubernetes API Server so the Collector
can scrape it.
Discover API Server metrics with the jobs configuration
To discover and scrape API Server metrics without using ServiceMonitors, you can
use the jobs section of the Collector configuration for service discovery.
The following is an example that discovers the API Server based on the
__meta_kubernetes_pod_label_k8s_app label having the value kube-apiserver (found in the
API Server Service object).
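As a sketch of the matching logic only, and assuming the jobs section accepts Prometheus-style relabel rules (an assumption to check against the Collector's jobs reference), the keep rule would look like this fragment:

```yaml
relabel_configs:
  # Keep only targets whose k8s-app pod label equals kube-apiserver.
  - source_labels: [__meta_kubernetes_pod_label_k8s_app]
    action: keep
    regex: kube-apiserver
```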