Troubleshooting Kibana

Using the Kibana console with OpenShift Container Platform can surface problems that are easy to solve but are not accompanied by useful error messages. Check the following troubleshooting sections if you experience any of these problems when deploying Kibana on OpenShift Container Platform.

Troubleshooting a Kibana login loop

The OAuth2 proxy on the Kibana console must share a secret with the master host’s OAuth2 server. If the secret is not identical on both servers, it can cause a login loop where you are continuously redirected back to the Kibana login page.

Procedure

To fix this issue:

  1. Run the following command to delete the current OAuthClient:

    $ oc delete oauthclient/kibana-proxy
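
After the deletion, the logging deployment should recreate the kibana-proxy OAuthClient with a secret that matches the OAuth server. As a quick check that the object has come back (this assumes the default object name), run:

$ oc get oauthclient/kibana-proxy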

Troubleshooting a cryptic error when viewing the Kibana console

When you attempt to visit the Kibana console, you might receive a browser error instead:

{"error":"invalid_request","error_description":"The request is missing a required parameter,
 includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed."}

This can be caused by a mismatch between the OAuth2 client and server. The return address for the client must be in a whitelist so that the server can securely redirect back after you log in.

Fix this issue by replacing the OAuthClient entry.

Procedure

To replace the OAuthClient entry:

  1. Run the following command to delete the current OAuthClient:

    $ oc delete oauthclient/kibana-proxy

If the problem persists, check that you are accessing Kibana at a URL listed in the OAuth client. This issue can be caused by accessing the URL at a forwarded port, such as 1443 instead of the standard 443 HTTPS port. You can adjust the server whitelist by editing the OAuth client:

$ oc edit oauthclient/kibana-proxy
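
In the editor, the whitelist is the OAuthClient's redirectURIs field. The following is a minimal sketch of the relevant part of the object, assuming the default client name; the host shown is hypothetical, so substitute the URL of your actual Kibana route, including any nonstandard port:

apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: kibana-proxy
redirectURIs:
# Hypothetical host; use the value reported by `oc get route --all-namespaces --selector logging-infra=support`
- https://kibana-openshift-logging.apps.example.com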

Troubleshooting a 503 error when viewing the Kibana console

If you receive a proxy error when viewing the Kibana console, it could be caused by one of two issues:

  • The Kibana service might not be finding any live Kibana pods. If Elasticsearch is slow to start, Kibana can time out trying to reach it. Check whether the relevant service has any endpoints:

    $ oc describe service kibana
    Name:                   kibana
    [...]
    Endpoints:              <none>

    If any Kibana pods are live, endpoints are listed. If none are listed, check the state of the Kibana pods and deployment. You might have to scale the deployment down and back up again; see the example scale commands after this list.

  • The route for accessing the Kibana service is masked. This can happen if you perform a test deployment in one project and then deploy in a different project without completely removing the first deployment. When multiple routes are sent to the same destination, the default router routes only to the first one created. Check whether the problematic route is defined in multiple places; a cleanup sketch follows this list:

    $ oc get route --all-namespaces --selector logging-infra=support
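
If the first issue applies and the endpoints list is empty, restarting Kibana by scaling its deployment down and back up is a reasonable first step. The following is a sketch of the commands, assuming the default deployment name kibana in the openshift-logging project:

$ oc -n openshift-logging scale deployment/kibana --replicas=0
$ oc -n openshift-logging scale deployment/kibana --replicas=1

Note that the logging deployment manages Kibana, so the replica count might be reconciled back automatically; the goal is only to restart the pods.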
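
If the second issue applies and the same host appears in more than one namespace, the stale route from the earlier deployment is masking the new one. Deleting the stale copy lets the router serve the remaining route; the project and route names below are hypothetical:

$ oc -n test-project delete route kibana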