Troubleshoot Core Operator
Depending on your installation, you can use Chronosphere Telemetry Pipeline, Helm, or kubectl command-line tools to diagnose and resolve issues with your Core Operator and Core Instance installation.
Troubleshooting Core Operator
If you use Pipeline CLI to install Core Operator and get the following error message, you're probably using an older version of Pipeline CLI:
calyptia: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by calyptia)
calyptia: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by calyptia)
Core Operator operations require Pipeline CLI version 1.4.7 or later. Upgrade Pipeline CLI to resolve this error.
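To confirm which version you have installed, you can ask the CLI directly. This assumes the common --version flag; check the CLI's help output if your build differs:
calyptia --version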
Verifying your Core Operator Installation
You can examine the logs of your Core Operator installation to verify that the installation succeeded and to diagnose any issues you encounter when installing with Pipeline CLI:
kubectl logs deploy/calyptia-core-controller-manager
The logs for a successfully installed Core Operator look similar to the following:
{"level":"info","ts":"2023-09-19T07:24:55Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2023-09-19T07:24:55Z","logger":"setup","msg":"Successfully started","controller":"Deployment"}
{"level":"info","ts":"2023-09-19T07:24:55Z","logger":"setup","msg":"Successfully started","controller":"DaemonSet"}
{"level":"info","ts":"2023-09-19T07:24:55Z","logger":"setup","msg":"Successfully started","controller":"Secret"}
{"level":"info","ts":"2023-09-19T07:24:55Z","logger":"setup","msg":"starting manager"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting server","kind":"health probe","addr":"[::]:8081"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting EventSource","controller":"daemonset","controllerGroup":"apps","controllerKind":"DaemonSet","source":"kind source: *v1.DaemonSet"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting Controller","controller":"daemonset","controllerGroup":"apps","controllerKind":"DaemonSet"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting EventSource","controller":"pipeline","controllerGroup":"core.calyptia.com","controllerKind":"Pipeline","source":"kind source: *v1.Pipeline"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting EventSource","controller":"pipeline","controllerGroup":"core.calyptia.com","controllerKind":"Pipeline","source":"kind source: *v1.Deployment"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting EventSource","controller":"pipeline","controllerGroup":"core.calyptia.com","controllerKind":"Pipeline","source":"kind source: *v1.DaemonSet"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting EventSource","controller":"pipeline","controllerGroup":"core.calyptia.com","controllerKind":"Pipeline","source":"kind source: *v1.ConfigMap"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting EventSource","controller":"pipeline","controllerGroup":"core.calyptia.com","controllerKind":"Pipeline","source":"kind source: *v1.Service"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting Controller","controller":"pipeline","controllerGroup":"core.calyptia.com","controllerKind":"Pipeline"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting EventSource","controller":"deployment","controllerGroup":"apps","controllerKind":"Deployment","source":"kind source: *v1.Deployment"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting Controller","controller":"deployment","controllerGroup":"apps","controllerKind":"Deployment"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting EventSource","controller":"secret","controllerGroup":"","controllerKind":"Secret","source":"kind source: *v1.Secret"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting Controller","controller":"secret","controllerGroup":"","controllerKind":"Secret"}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting workers","controller":"daemonset","controllerGroup":"apps","controllerKind":"DaemonSet","worker count":1}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting workers","controller":"deployment","controllerGroup":"apps","controllerKind":"Deployment","worker count":1}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting workers","controller":"secret","controllerGroup":"","controllerKind":"Secret","worker count":1}
{"level":"info","ts":"2023-09-19T07:24:55Z","msg":"Starting workers","controller":"pipeline","controllerGroup":"core.calyptia.com","controllerKind":"Pipeline","worker count":1}
Troubleshooting Core Instance Installation
A Core instance is a single deployment with two containers that synchronize pipeline resources between the Cloud API and the cluster. The to-cloud container pushes every newly created or updated pipeline to the Cloud API and ensures that namespaces from the cluster are synchronized with the Cloud API. The from-cloud container pulls all pipelines from the Cloud API and applies them to the cluster as Pipeline custom resources.
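You can verify that both containers are present on your Core Instance pod. This is a minimal check with plain kubectl; {CORE_POD_NAME} is the pod-name placeholder used by the commands later in this section:
kubectl get pod {CORE_POD_NAME} -o jsonpath='{.spec.containers[*].name}'
The output should include both from-cloud and to-cloud.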
To get a list of your core instances, use the following command:
calyptia get core_instances
The command returns a list of your core instances, showing which instances are running, failed, or unreachable. The results are similar to the following:
NAME             VERSION   ENVIRONMENT   PIPELINES   TAGS   STATUS        AGE
coreinstance01   v1.0.10   default       1                  running       4 minutes
coretesting      v1.0.10   default       1                  unreachable   5 minutes
In this example, the coretesting instance is unreachable, and coreinstance01 is running.
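To investigate an unreachable instance, a reasonable first step is to locate its pods and check whether they're running at all. The following sketch filters by instance name, which assumes the pod names contain the instance name; adjust the filter to match your deployment:
kubectl get pods --all-namespaces | grep coretesting
The pod name this returns is what the commands below refer to as {CORE_POD_NAME}.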
To obtain more details about your core instance, view the logs from each container by using kubectl logs with the container name and your pod name:
kubectl logs {CORE_POD_NAME} -c from-cloud
The logs from a running from-cloud sync container look similar to the following:
2023-09-25T11:54:58Z INFO Sync from Cloud
2023-09-25T11:54:58Z INFO found 1 pipelines. Syncing...
2023-09-25T11:54:58Z INFO Pipelines found in Cloud but not in the cluster: 0
2023-09-25T11:54:58Z INFO Syncing pipeline {"name": "health-check-9482"}
2023-09-25T11:54:58Z INFO Pipeline already exists updating {"name": "health-check-9482"}
2023-09-25T11:55:13Z INFO Sync from Cloud
2023-09-25T11:55:14Z INFO found 1 pipelines. Syncing...
2023-09-25T11:55:14Z INFO Pipelines found in Cloud but not in the cluster: 0
2023-09-25T11:55:14Z INFO Syncing pipeline {"name": "health-check-9482"}
2023-09-25T11:55:14Z INFO Pipeline already exists updating {"name": "health-check-9482"}
2023-09-25T11:55:28Z INFO Sync from Cloud
2023-09-25T11:55:29Z INFO found 1 pipelines. Syncing...
2023-09-25T12:28:59Z INFO Pipelines found in Cloud but not in the cluster: 0
2023-09-25T12:28:59Z INFO Syncing pipeline {"name": "health-check-9482"}
2023-09-25T12:28:59Z INFO Pipeline already exists updating {"name": "health-check-9482"}
2023-09-25T12:29:13Z INFO Sync from Cloud
2023-09-25T12:29:14Z INFO found 1 pipelines. Syncing...
2023-09-25T12:31:28Z INFO Pipelines found in Cloud but not in the cluster: 0
2023-09-25T12:31:28Z INFO Syncing pipeline {"name": "health-check-9482"}
2023-09-25T12:31:28Z INFO Pipeline already exists updating {"name": "health-check-9482"}
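To watch the from-cloud sync loop in real time instead of a one-time snapshot, follow the logs with kubectl's -f flag:
kubectl logs -f {CORE_POD_NAME} -c from-cloud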
To obtain the logs from your to-cloud sync container, run the same command with the to-cloud container name:
kubectl logs {CORE_POD_NAME} -c to-cloud
The logs from a running to-cloud sync container look similar to the following:
2023-09-25T11:54:58Z INFO Namespace created: {"name": "core-operator"}
2023-09-25T11:54:58Z INFO namespace already exists skipping: {"name": "core-operator"}
2023-09-25T11:54:58Z INFO Namespace created: {"name": "default"}
2023-09-25T11:54:58Z INFO namespace already exists skipping: {"name": "default"}
2023-09-25T11:54:58Z INFO Namespace created: {"name": "kube-node-lease"}
2023-09-25T11:54:58Z INFO namespace already exists skipping: {"name": "kube-node-lease"}
2023-09-25T11:54:58Z INFO Namespace created: {"name": "kube-public"}
2023-09-25T11:54:59Z INFO namespace already exists skipping: {"name": "kube-public"}
2023-09-25T11:54:59Z INFO Namespace created: {"name": "kube-system"}
2023-09-25T11:54:59Z INFO namespace already exists skipping: {"name": "kube-system"}
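If a sync container has restarted, its current logs might not show what went wrong. You can retrieve the logs from the previous container instance with kubectl's --previous flag, for example:
kubectl logs --previous {CORE_POD_NAME} -c to-cloud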