Troubleshooting UI plugin

The Cluster Observability Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The troubleshooting UI plugin for OKD version 4.16+ provides observability signal correlation, powered by the open source Korrel8r project. With the troubleshooting panel that is available under Observe → Alerting, you can easily correlate metrics, logs, alerts, netflows, and additional observability signals and resources across different data stores. Users of OKD version 4.17+ can also access the troubleshooting UI panel from the Application Launcher app launcher.

When you install the troubleshooting UI plugin, a Korrel8r service named korrel8r is deployed in the same namespace. The service uses its correlation engine to locate related observability signals and Kubernetes resources.

The output of Korrel8r is displayed in the form of an interactive node graph in the OKD web console. Nodes in the graph represent a type of resource or signal, while edges represent relationships. When you click a node, you are automatically redirected to the corresponding web console page with the specific information for that node, for example, a metric, log, or pod.

Installing the Cluster Observability Operator troubleshooting UI plugin

Prerequisites
  • You have access to the cluster as a user with the cluster-admin cluster role.

  • You have logged in to the OKD web console.

  • You have installed the Cluster Observability Operator.

Procedure
  1. In the OKD web console, click Operators → Installed Operators and select Cluster Observability Operator.

  2. Choose the UI Plugin tab (at the far right of the tab list) and press Create UIPlugin.

  3. Select YAML view, enter the following content, and then press Create:

    apiVersion: observability.openshift.io/v1alpha1
    kind: UIPlugin
    metadata:
      name: troubleshooting-panel
    spec:
      type: TroubleshootingPanel
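
    After you create the UIPlugin resource, the Operator deploys the korrel8r service described earlier. As a rough verification sketch, which assumes that the Cluster Observability Operator and the plugin run in the openshift-cluster-observability-operator namespace (adjust the namespace to match your installation), you can confirm that the plugin resource and the Korrel8r deployment exist:

      oc get uiplugin troubleshooting-panel
      oc get deployments,services -n openshift-cluster-observability-operator | grep korrel8r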

Using the Cluster Observability Operator troubleshooting UI plugin

Prerequisites
  • You have access to the OKD cluster as a user with the cluster-admin cluster role. If your cluster version is 4.17+, you can access the troubleshooting UI panel from the Application Launcher app launcher.

  • You have logged in to the OKD web console.

  • You have installed OKD Logging, if you want to visualize correlated logs.

  • You have installed OKD Network Observability, if you want to visualize correlated netflows.

  • You have installed the Cluster Observability Operator.

  • You have installed the Cluster Observability Operator troubleshooting UI plugin.

    The troubleshooting panel relies on the observability signal stores installed in your cluster. Kubernetes resources, alerts, and metrics are always available by default in an OKD cluster. Other signal types require optional components to be installed; a quick way to check for them is shown after the following list:

    • Logs: Red Hat OpenShift Logging (collection) and Loki Operator provided by Red Hat (store)

    • Network events: Network observability provided by Red Hat (collection) and Loki Operator provided by Red Hat (store)
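
    If you are not sure which of these optional components are installed, one rough check, assuming the Operators were installed through Operator Lifecycle Manager, is to list the installed ClusterServiceVersions and look for the logging, Loki, and network observability Operators:

      oc get csv -A | grep -iE 'logging|loki|netobserv'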

    Procedure
    1. In the admin perspective of the web console, navigate to Observe → Alerting and then select an alert. If the alert has correlated items, a Troubleshooting Panel link will appear above the chart on the alert detail page.

      Troubleshooting Panel link

      Click on the Troubleshooting Panel link to display the panel.

    2. The panel consists of query details and a topology graph of the query results. The selected alert is converted into a Korrel8r query string and sent to the korrel8r service. The results are displayed as a graph network connecting the returned signals and resources. This is a neighbourhood graph, starting at the current resource and including related objects up to 3 steps away from the starting point. Clicking on nodes in the graph takes you to the corresponding web console pages for those resources.

    3. You can use the troubleshooting panel to find resources relating to the chosen alert.

      Clicking on a node may sometimes show fewer results than indicated on the graph. This is a known issue that will be addressed in a future release.

      Troubleshooting panel
      1. Alert (1): This node is the starting point in the graph and represents the KubeContainerWaiting alert displayed in the web console.

      2. Pod (1): This node indicates that there is a single Pod resource associated with this alert. Clicking on this node will open a console search showing the related pod directly.

      3. Event (2): There are two Kubernetes events associated with the pod. Click this node to see the events.

      4. Logs (74): This pod has 74 lines of logs, which you can access by clicking on this node.

      5. Metrics (105): There are many metrics associated with the pod.

      6. Network (6): There are network events, meaning the pod has communicated over the network. The remaining nodes in the graph represent the Service, Deployment and DaemonSet resources that the pod has communicated with.

      7. Focus: Clicking this button updates the graph. By default, the graph itself does not change when you click on nodes in the graph. Instead, the main web console page changes, and you can then navigate to other resources using links on the page, while the troubleshooting panel itself stays open and unchanged. To force an update to the graph in the troubleshooting panel, click Focus. This draws a new graph, using the current resource in the web console as the starting point.

      8. Show Query: Clicking this button enables some experimental features:

        Experimental features
        1. Hide Query hides the experimental features.

        2. The query that identifies the starting point for the graph. The query language, part of the Korrel8r correlation engine used to create the graphs, is experimental and might change in the future. The query is updated by the Focus button to correspond to the resources in the main web console window.

        3. Neighbourhood depth is used to display a smaller or larger neighbourhood.

          Setting a large value in a large cluster might cause the query to fail if the number of results is too large.

        4. Goal class results in a goal-directed search instead of a neighbourhood search. A goal-directed search shows all paths from the starting point to the goal class, which indicates a type of resource or signal. The format of the goal class is experimental and may change. Currently, the following goals are valid (an example follows the list):

          • k8s:RESOURCE[VERSION.[GROUP]] identifying a kind of Kubernetes resource. For example, k8s:Pod or k8s:Deployment.apps.v1.

          • alert:alert representing any alert.

          • metric:metric representing any metric.

          • netflow:network representing any network observability network event.

          • log:LOG_TYPE representing stored logs, where LOG_TYPE must be one of application, infrastructure or audit.
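
        For example, setting the goal class to log:application searches for all paths from the selected alert to stored application logs, and a goal class of k8s:Deployment.apps.v1 finds paths to Deployment resources. Both strings follow the formats listed above and can be useful when you already know the type of signal you are looking for.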

Creating the example alert

To trigger an alert as a starting point to use in the troubleshooting UI panel, you can deploy a container that is deliberately misconfigured.

Procedure
  1. Use the following YAML, either from the command line or in the web console, to create a broken deployment in a system namespace:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: bad-deployment
      namespace: default (1)
    spec:
      selector:
        matchLabels:
          app: bad-deployment
      template:
        metadata:
          labels:
            app: bad-deployment
        spec:
          containers: (2)
          - name: bad-deployment
            image: quay.io/openshift-logging/vector:5.8
    1 The deployment must be in a system namespace (such as default) to cause the desired alerts.
    2 This container deliberately tries to start a vector server with no configuration file. The server logs a few messages, and then exits with an error. Alternatively, you can deploy any container you like that is badly configured, causing it to trigger an alert.
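
    If you prefer the command line, save the YAML above to a file and apply it with oc; the file name shown is only an example. You can then watch the pod fail in the default namespace:

      oc apply -f bad-deployment.yaml
      oc get pods -n default -l app=bad-deployment
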
  2. View the alerts:

    1. Go to Observe → Alerting and click clear all filters. View the Pending alerts.

      Alerts first appear in the Pending state. They do not start Firing until the container has been crashing for some time. By viewing Pending alerts, you do not have to wait as long to see them occur.

    2. Choose one of the KubeContainerWaiting, KubePodCrashLooping, or KubePodNotReady alerts and open the troubleshooting panel by clicking on the link. Alternatively, if the panel is already open, click the Focus button to update the graph.
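
  3. Optional: when you are finished exploring the troubleshooting panel, remove the example deployment so that the failing pods, and eventually the alerts, go away:

      oc delete deployment bad-deployment -n default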