Troubleshooting
Check Deployment Logs
If you run into an error during deployment (i.e., when running kubectl apply -f kustomized.yaml), you can safely retry the operation as it is idempotent.
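For example (a minimal sketch; the kubectl diff step is an optional extra not mentioned above):

# Optionally preview what would change before re-applying
kubectl diff -f kustomized.yaml

# Re-running the same apply is safe because the operation is idempotent
kubectl apply -f kustomized.yaml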
You can also check the logs of a specific resource whose deployment errored:
kubectl -n ess logs <resource>
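For example, to view the logs of a failing pod (the pod name below is hypothetical), you might run:

# Show the last 100 log lines of the pod
kubectl -n ess logs pod/ess-proxy-7d9f8c6b5-x2k4q --tail=100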
Check Status of Your ESS Services

You can check the status of the various ESS services:
kubectl get all -n ess

The operation returns the various ESS services and their status (the content has been abbreviated):
NAME                        READY   STATUS    RESTARTS   AGE
pod/ess-...                 1/1     Running   0          7m30s
...

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
service/ess-...             ClusterIP   10.105.231.242   <none>        443/TCP,9000/TCP   8m25s
...

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ess-...     0/1     1            0           7m34s
...

NAME                        DESIRED   CURRENT   READY   AGE
replicaset.apps/ess-...     1         1         0       7m33s
Debug a Service

When a service is not in the Running status, you can investigate it by issuing the kubectl describe command.
For example, consider the following pod statuses (the status output has been abbreviated):
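The relevant line of such output might look like the following illustrative sketch (the AGE value is hypothetical):

NAME                                            READY   STATUS    RESTARTS   AGE
pod/strimzi-cluster-operator-655b4f74c8-dg7bc   0/1     Running   0          9m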
The pod/strimzi-cluster-operator-655b4f74c8-dg7bc is Running but has 0 instances in the Ready state. To investigate, use the kubectl describe command on the resource:
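For example (assuming the deployment's ess namespace):

kubectl -n ess describe pod/strimzi-cluster-operator-655b4f74c8-dg7bc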
In the output, go to the Events section at the bottom (the output has been abbreviated):
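The Events section of the describe output has the following general shape (all values elided here):

Events:
  Type     Reason   Age   From   Message
  ----     ------   ----  ----   -------
  Warning  ...      ...   ...    ...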
The Events section lists the reasons why the service did not start. Review the messages to help diagnose and address any issues.
Alternatively, you can access the Events information through the kubectl get events command, using the resource name strimzi-cluster-operator-655b4f74c8-dg7bc (do not include the resource type, e.g., pod/):
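For example, using a field selector on the pod name (assuming the ess namespace; your filtering approach may differ):

kubectl get events -n ess --field-selector involvedObject.name=strimzi-cluster-operator-655b4f74c8-dg7bc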