Verify the Collector is scraping metrics
Use the following methods to ensure the Collector is scraping the intended metrics.
Contact your Technical Account Manager (TAM) or ask in your support Slack channel if there are other Collector statistics you want to access. Likewise, contact your TAM to request removal of specific telemetry data from Chronosphere.
Live Telemetry Analyzer lets you inspect, in real time, the stream of metrics that Chronosphere is ingesting. You can use it to verify the Collector's connection to your Chronosphere app by viewing the metrics that the Collector emits about itself.
1. In the navigation menu, select Exploring > Live Telemetry Analyzer.
2. Click Live to display streaming metrics.
3. In the Keys list, click the `__name__` and `instance` label keys.
4. In the Values filter, enter the following key:value pairs:
   - INSTANCE_NAME: the host and port where the Collector is running. For example, a Collector running locally uses an instance name of `0.0.0.0:3030`.

In the Values list, the displayed metrics include your Collector instance in the instance column.
Metrics Explorer lets you validate metrics if you know the name of the metric or label you're searching for.
1. In the navigation menu, select Exploring > Metrics Explorer.
2. Enter the following query in the query field:

   ```promql
   count(chronocollector_jobs) by (instance)
   ```

3. Click Run query.

The name of your Collector instance returned by the `kubectl logs` command displays in the table of metrics:

- POD_NAME: the name of the Kubernetes pod where your Collector instance is running.
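If the query returns no rows for your Collector, you can narrow the search to a single instance. As a sketch, assuming a Collector running locally at `0.0.0.0:3030` (substitute your own instance value):

```promql
count(chronocollector_jobs{instance="0.0.0.0:3030"}) by (instance)
```

A non-empty result confirms that this specific Collector instance is registering scrape jobs.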
Chronosphere includes an actively maintained Collectors dashboard by default. This dashboard displays information about the metrics the Collector scrapes. When the Collector begins receiving metrics, the dashboard panels populate with statistics such as:
- Number of Collectors running on the cluster
- Number of metrics scraped per second
- Number of scrape targets per job
- Memory and CPU consumption
- Push latency to Chronosphere
Creating monitors based on some of these key metrics can help you detect if a Collector is performing poorly.
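For example, a monitor that fires when Collector self-metrics disappear can catch a crashed or disconnected Collector. The sketch below expresses that idea as a Prometheus-style alerting rule; Chronosphere monitors are configured through Chronosphere's own UI and APIs, so treat this only as an illustration of the query logic, and note that the duration and severity values are assumptions:

```yaml
# Hypothetical Prometheus-style rule; Chronosphere monitors use their own
# configuration format, so only the expr logic carries over.
groups:
  - name: collector-health
    rules:
      - alert: CollectorDown
        # chronocollector_jobs is the Collector self-metric queried earlier
        # on this page; absent() returns 1 when no such series exists.
        expr: absent(chronocollector_jobs)
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: No Collector is reporting self-metrics
```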
Refer to the default dashboard metrics for the full list of available metrics.
Go to Dashboards to access the Collector dashboard.
If you're using Kubernetes discovery, the Collector configuration includes an annotation for the Collector dashboard by default.
If you're using Prometheus discovery and don't see the Collector dashboard, ensure the following structure exists in your Collector configuration file under the `discovery` section:

```yaml
discovery:
  prometheus:
    - job_name: 'Collector'
      scrape_interval: 15s
      scrape_timeout: 30s
      static_configs:
        - targets: ['0.0.0.0:3030']
```
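You can also confirm outside the Chronosphere UI that the Collector is serving self-metrics on its scrape target (`0.0.0.0:3030` in the configuration above). A minimal Python sketch, assuming the Collector exposes Prometheus exposition text at the conventional `/metrics` path (an assumption, not confirmed by this document):

```python
# Sketch: list the self-metrics a Collector exposes, assuming it serves
# Prometheus exposition text at http://0.0.0.0:3030/metrics. The /metrics
# path is the usual Prometheus convention, not confirmed here.
from urllib.request import urlopen


def metric_names(exposition_text: str) -> set[str]:
    """Extract unique metric names from Prometheus exposition format."""
    names = set()
    for line in exposition_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        # A sample line looks like: name{labels} value [timestamp]
        names.add(line.split("{")[0].split(" ")[0])
    return names


def collector_metric_names(url: str = "http://0.0.0.0:3030/metrics") -> set[str]:
    """Fetch the endpoint and keep only Collector self-metrics."""
    with urlopen(url, timeout=5) as resp:
        text = resp.read().decode("utf-8")
    # Collector self-metrics use the chronocollector_ prefix, for example
    # chronocollector_jobs, which is queried earlier on this page.
    return {n for n in metric_names(text) if n.startswith("chronocollector_")}
```

Calling `collector_metric_names()` on the host running the Collector returns a non-empty set when the Collector is up and emitting metrics about itself.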