Archived docs. ESS 2.0 has reached end of life.

Troubleshoot Installation

Check Deployment Logs

If you run into an error during deployment (i.e., when running kubectl apply -f kustomized.yaml), you can safely retry the command, as the operation is idempotent.
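For example, you could wrap the apply in a simple retry loop; the retry count and sleep interval below are arbitrary illustrative choices, not values from the ESS docs:

```shell
# Re-apply the manifest up to 3 times; because kubectl apply is
# idempotent, resources that already succeeded are left unchanged.
for attempt in 1 2 3; do
  kubectl apply -f kustomized.yaml && break
  echo "apply failed (attempt $attempt), retrying in 10s..." >&2
  sleep 10
done
```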

You can also check the logs of a specific resource whose deployment errored:

kubectl -n ess logs <resource>
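A couple of common variations may help when inspecting logs; the flags below are standard kubectl options, shown here as a sketch with the same <resource> placeholder:

```shell
# Show only the most recent log lines
kubectl -n ess logs <resource> --tail=100

# If the pod restarted, show logs from the previous container instance
kubectl -n ess logs <resource> --previous
```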

Check Status of Your ESS Services

You can check the status of the various ESS services:

kubectl get all -n ess

The operation returns the various ESS services and their status (the content has been abbreviated):

NAME                                               READY   STATUS             RESTARTS   AGE
pod/ess-....                                       1/1     Running            0          7m30s
...

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ess-...                       ClusterIP   10.105.231.242   <none>        443/TCP,9000/TCP             8m25s
...

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ess-...                               0/1     1            0           7m34s
...

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/ess-...                                          1         1         0       7m33s
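If the output is long, you can narrow it to pods that are not running, using a standard field selector (a convenience sketch; note that a pod can be Running yet still not Ready):

```shell
# List only pods whose phase is not Running
kubectl -n ess get pods --field-selector=status.phase!=Running
```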

Debug a Service

When a service is not in the Running status, you can investigate by issuing the kubectl describe command:

kubectl describe -n ess <resource>

For example, consider the following pod statuses (the status output has been abbreviated):

NAME                                                  READY   STATUS       RESTARTS   AGE
...
pod/strimzi-cluster-operator-655b4f74c8-dg7bc         0/1     Running      0          29m
...

The pod/strimzi-cluster-operator-655b4f74c8-dg7bc is Running but has 0 instances in the Ready state. To investigate, use the kubectl describe command on the resource:

kubectl -n ess describe pod/strimzi-cluster-operator-655b4f74c8-dg7bc

In the output, go to the Events section at the bottom (the output has been abbreviated):

Name:         strimzi-cluster-operator-655b4f74c8-dg7bc
Namespace:    ess
Priority:     0

...

Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  30m                default-scheduler  Successfully assigned ess/strimzi-cluster-operator-655b4f74c8-dg7bc to minikube
  Normal   Pulling    26m                kubelet            Pulling image "quay.io/strimzi/operator:0.21.1"
  Normal   Pulled     116s               kubelet            Successfully pulled image "quay.io/strimzi/operator:0.21.1" in 24m56.677087628s
  Normal   Created    110s               kubelet            Created container strimzi-cluster-operator
  Normal   Started    91s                kubelet            Started container strimzi-cluster-operator
  Warning  Unhealthy  23s (x3 over 83s)  kubelet            Liveness probe failed: Get "http://172.17.0.11:8080/healthy": dial tcp 172.17.0.11:8080: connect: connection refused
  Warning  Unhealthy  4s (x3 over 64s)   kubelet            Readiness probe failed: Get "http://172.17.0.11:8080/ready": dial tcp 172.17.0.11:8080: connect: connection refused

The Events section lists the reasons why the service did not reach the Ready state; in this example, the liveness and readiness probes failed with connection-refused errors on port 8080. Review the messages to help diagnose and address the issue.
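To see how the failing probes are configured, you can also inspect the pod spec directly; the sketch below uses kubectl's jsonpath output format:

```shell
# Print the liveness probe configured on the pod's first container
kubectl -n ess get pod strimzi-cluster-operator-655b4f74c8-dg7bc \
  -o jsonpath='{.spec.containers[0].livenessProbe}'
```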

Alternatively, you can access the same events through the kubectl get events command, filtered on the resource name strimzi-cluster-operator-655b4f74c8-dg7bc (do not include the resource type, e.g., pod/):

kubectl -n ess get events --sort-by=.metadata.creationTimestamp --field-selector involvedObject.name=strimzi-cluster-operator-655b4f74c8-dg7bc
LAST SEEN   TYPE      REASON                 OBJECT                                          MESSAGE
43m         Normal    Scheduled              pod/strimzi-cluster-operator-655b4f74c8-dg7bc   Successfully assigned ess/strimzi-cluster-operator-655b4f74c8-dg7bc to minikube
39m         Normal    Pulling                pod/strimzi-cluster-operator-655b4f74c8-dg7bc   Pulling image "quay.io/strimzi/operator:0.21.1"
14m         Normal    Pulled                 pod/strimzi-cluster-operator-655b4f74c8-dg7bc   Successfully pulled image "quay.io/strimzi/operator:0.21.1" in 24m56.677087628s
9m51s       Normal    Created                pod/strimzi-cluster-operator-655b4f74c8-dg7bc   Created container strimzi-cluster-operator
11m         Normal    Started                pod/strimzi-cluster-operator-655b4f74c8-dg7bc   Started container strimzi-cluster-operator
4m50s       Warning   Unhealthy              pod/strimzi-cluster-operator-655b4f74c8-dg7bc   Liveness probe failed: Get "http://172.17.0.11:8080/healthy": dial tcp 172.17.0.11:8080: connect: connection refused
9m31s       Warning   Unhealthy              pod/strimzi-cluster-operator-655b4f74c8-dg7bc   Readiness probe failed: Get "http://172.17.0.11:8080/ready": dial tcp 172.17.0.11:8080: connect: connection refused
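To follow new events as they are recorded, you can add the standard --watch flag (the initial listing is sorted; subsequent events are appended as they arrive):

```shell
# Stream events for the namespace, oldest first
kubectl -n ess get events --watch --sort-by=.metadata.creationTimestamp
```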