Verify the Collector is scraping metrics

Use the following methods to ensure the Collector is scraping the intended metrics.

Contact Chronosphere Support if there are other Collector statistics you want to access. Similarly, contact Chronosphere Support to request having specific telemetry data removed from Chronosphere.

Live Telemetry Analyzer

Live Telemetry Analyzer lets you inspect, in real time, the stream of metrics that Chronosphere is ingesting. You can use Live Telemetry Analyzer to verify the Collector's connection to your Chronosphere app by viewing the metrics that the Collector emits about itself.

  1. Click Go to Admin.

  2. In the navigation menu, select Analyzers > Live Telemetry Analyzer, and then click the Metrics tab.

  3. Click Live to display streaming metrics.

  4. In the Keys list, click the __name__ and instance label keys.

  5. In the Values filter, enter the following key:value pairs:


    Replace INSTANCE_NAME with the host and port where the Collector is running. For example, a Collector running locally uses an instance name of localhost:3030.

    In the Values list, the displayed metrics include your Collector instance in the instance column.
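As a spot check outside the UI, you can also query the Collector's self-metrics endpoint directly. This is a sketch: it assumes the Collector is running locally on port 3030 (matching the example instance name above) and serves Prometheus-format metrics at /metrics; adjust the host, port, and path for your deployment.

```shell
# Fetch the Collector's self-metrics and filter for its own series.
# The localhost:3030 address and the /metrics path are assumptions --
# substitute the values used by your deployment.
curl -s http://localhost:3030/metrics | grep '^chronocollector_'
```

If the command returns metric lines, the Collector is up and emitting the same self-metrics that Live Telemetry Analyzer displays.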

Metrics Explorer

Metrics Explorer lets you validate metrics if you know the name of the metric or label you're searching for.

  1. In the navigation menu, select Explorers > Metrics Explorer.

  2. Enter the following query in the query field.

    count(chronocollector_jobs) by (instance)
  3. Click Run query.

The instance name of your Collector, matching the name shown in the kubectl logs output, appears in the table of metrics:

2023-02-03 10:56:04  default/chronocollector-POD_NAME

POD_NAME is the name of the Kubernetes pod where your Collector instance is running.
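If you don't know the pod name, you can list the Collector pods directly. A minimal sketch, assuming the Collector runs in the default namespace with an app=chronocollector label; adjust the namespace and label selector to match your deployment.

```shell
# List Collector pods to find POD_NAME. The namespace and label
# selector are assumptions -- adjust them for your deployment.
kubectl get pods -n default -l app=chronocollector
```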


Collectors dashboard

Chronosphere includes an actively maintained Collectors dashboard by default. This dashboard displays information about the metrics the Collector scrapes. When the Collector begins receiving metrics, the dashboard panels populate with statistics such as:

  • Number of Collectors running on the cluster
  • Number of metrics scraped per second
  • Number of scrape targets per job
  • Memory and CPU consumption
  • Push latency to Chronosphere

Creating monitors based on some of these key metrics can help you detect if a Collector is performing poorly.
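For example, a Prometheus-style alerting rule could notify you when no Collector instances are reporting. This is a sketch built on the chronocollector_jobs metric used in the query above; the group name, alert name, durations, and severity are illustrative, and the exact rule format depends on how you define monitors in Chronosphere.

```yaml
groups:
  - name: collector-health
    rules:
      - alert: CollectorNotReporting
        # Fires when the chronocollector_jobs series (used in the
        # Metrics Explorer query above) disappears entirely.
        expr: absent(chronocollector_jobs)
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "No Collector instances are reporting chronocollector_jobs"
```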

Refer to the default dashboard metrics for the full list of available metrics.

In the navigation menu, select Dashboards to access the Collector dashboard.

  • If you're using Kubernetes discovery, the Collector configuration includes an annotation for the Collector dashboard by default.

  • If you're using Prometheus discovery and don't see the Collector dashboard, ensure the following structure exists in your Collector configuration file under the discovery section:

        - job_name: 'Collector'
          scrape_interval: 15s
          scrape_timeout: 30s
          static_configs:
            - targets: ['']