You can use the following tools to get debugging information about your OKD cluster.
The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including:
Resource definitions
Service logs
By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
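For example, running the command with no arguments collects the default cluster data and writes it to a directory whose name starts with must-gather.local in the current working directory:
$ oc adm must-gather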
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section.
For example:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.20.0
To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section.
For example:
$ oc adm must-gather -- /usr/bin/gather_audit_logs
|
When you run oc adm must-gather, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory. |
For example:
NAMESPACE NAME READY STATUS RESTARTS AGE
...
openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s
openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s
...
Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option.
For example:
$ oc adm must-gather --run-namespace <namespace> \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.20.0
You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.
| Image | Purpose |
|---|---|
| | Data collection for KubeVirt. |
| | Data collection for Knative. |
| | Data collection for service mesh. |
| | Data collection for migration-related information. |
| | Data collection for OpenShift Data Foundation. |
| | Data collection for OpenShift Logging. |
| | Data collection for the Local Storage Operator. |
| | Data collection for the Secrets Store CSI Driver Operator. |
You have access to the cluster as a user with the cluster-admin role.
The OKD CLI (oc) is installed.
Navigate to the directory where you want to store the must-gather data.
Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:
$ oc adm must-gather \
--image-stream=openshift/must-gather \ (1)
--image=quay.io/kubevirt/must-gather (2)
| 1 | The default OKD must-gather image |
| 2 | The must-gather image for KubeVirt |
Gathering debugging data for the Custom Metrics Autoscaler.
You can gather network logs on all nodes in a cluster.
Run the oc adm must-gather command with -- gather_network_logs:
$ oc adm must-gather -- gather_network_logs
|
By default, the |
Create a compressed file from the must-gather directory that was just created in your working directory. Make sure you provide the date and cluster ID for the unique must-gather data. For more information about how to find the cluster ID, see How to find the cluster-id or name on OpenShift cluster. For example, on a computer that uses a Linux operating system, run the following command:
$ tar cvaf must-gather-`date +"%m-%d-%Y-%H-%M-%S"`-<cluster_id>.tar.gz <must_gather_local_dir> (1)
| 1 | Replace <must_gather_local_dir> with the actual directory name. |
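This produces an archive with a name similar to must-gather-06-10-2025-14-05-30-<cluster_id>.tar.gz, following the %m-%d-%Y-%H-%M-%S date format in the command.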
Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal.
When you use the oc adm must-gather command to collect data, the default maximum storage for the information is 30% of the storage capacity of the container. After the 30% limit is reached, the container is killed and the gathering process stops. Information that was already gathered is downloaded to your local storage. To run the must-gather command again, you need either a container with more storage capacity or you must adjust the maximum volume percentage.
If the container reaches the storage limit, an error message similar to the following example is generated.
Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting...
You have access to the cluster as a user with the cluster-admin role.
The OpenShift CLI (oc) is installed.
Run the oc adm must-gather command with the --volume-percentage flag. The new value cannot exceed 100.
$ oc adm must-gather --volume-percentage <storage_percentage>
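For example, the following command raises the limit to 70 percent (an illustrative value):
$ oc adm must-gather --volume-percentage 70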
The Support Log Gather Operator builds on the functionality of the traditional must-gather tool to automate the collection of debugging data. It streamlines troubleshooting by packaging the collected information into a single .tar file and automatically uploading it to the specified Red Hat Support case.
|
Support Log Gather is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
The key features of Support Log Gather include the following:
No administrator privileges required: Enables you to collect and upload logs without needing elevated permissions, making it easier for non-administrators to gather data securely.
Simplified log collection: Collects debugging data from the cluster, such as resource definitions and service logs.
Configurable data upload: Provides configuration options to either automatically upload the .tar file to a support case, or store it locally for manual upload.
You can use the web console to install the Support Log Gather.
|
Support Log Gather is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
You have access to the cluster with cluster-admin privileges.
You have access to the OKD web console.
Log in to the OKD web console.
Navigate to Ecosystem → Software Catalog.
In the filter box, enter Support Log Gather.
Select Support Log Gather.
From the Version list, select the Support Log Gather version, and click Install.
On the Install Operator page, configure the installation settings:
Choose the Installed Namespace for the Operator.
The default Operator namespace is must-gather-operator. The must-gather-operator namespace is created automatically if it does not exist.
Select an Update approval strategy:
Select Automatic to have the Operator Lifecycle Manager (OLM) update the Operator automatically when a newer version is available.
Select Manual if Operator updates must be approved by a user with appropriate credentials.
Click Install.
Verify that the Operator is installed successfully:
Navigate to Ecosystem → Installed Operators.
Verify that Support Log Gather is listed with a Status of Succeeded in the must-gather-operator namespace.
Verify that Support Log Gather pods are running:
Navigate to Workloads → Pods.
Verify that the status of the Support Log Gather pods is Running.
You can use the Support Log Gather only after the pods are up and running.
To enable automated log collection for support cases, you can install Support Log Gather from the command-line interface (CLI).
|
Support Log Gather is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
You have access to the cluster with cluster-admin privileges.
Create a new project named must-gather-operator by running the following command:
$ oc new-project must-gather-operator
Create an OperatorGroup object:
Create a YAML file, for example, operatorGroup.yaml, that defines the OperatorGroup object:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: must-gather-operator
namespace: must-gather-operator
spec: {}
Create the OperatorGroup object by running the following command:
$ oc create -f operatorGroup.yaml
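The command prints a confirmation similar to the following:
operatorgroup.operators.coreos.com/must-gather-operator created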
Create a Subscription object:
Create a YAML file, for example, subscription.yaml, that defines the Subscription object:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: support-log-gather-operator
namespace: must-gather-operator
spec:
channel: tech-preview
name: support-log-gather-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
installPlanApproval: Automatic
Create the Subscription object by running the following command:
$ oc create -f subscription.yaml
Verify the status of the pods in the Operator namespace by running the following command:
$ oc get pods
NAME READY STATUS RESTARTS AGE
must-gather-operator-657fc74d64-2gg2w 1/1 Running 0 13m
The status of all the pods must be Running.
Verify that the subscription is created by running the following command:
$ oc get subscription -n must-gather-operator
NAME PACKAGE SOURCE CHANNEL
support-log-gather-operator support-log-gather-operator redhat-operators tech-preview
Verify that the Operator is installed by running the following command:
$ oc get csv -n must-gather-operator
NAME DISPLAY VERSION REPLACES PHASE
support-log-gather-operator.v4.20.0 support log gather 4.20.0 Succeeded
You can create a MustGather custom resource (CR) from the command-line interface (CLI) to automate the collection of diagnostic data from your cluster. This process also automatically uploads the data to a Red Hat Support case.
|
Support Log Gather is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. |
You have installed the OpenShift CLI (oc) tool.
You have installed Support Log Gather in your cluster.
You have a Red Hat Support case ID.
You have created a Kubernetes secret containing your Red Hat Customer Portal credentials. The secret must contain a username field and a password field.
You have created a service account.
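For example, you can create the credentials secret that the sample CR in this procedure references. The secret name mustgather-creds matches the sample CR below; the literal values are placeholders:
$ oc create secret generic mustgather-creds \
  -n must-gather-operator \
  --from-literal=username=<portal_username> \
  --from-literal=password=<portal_password>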
Create a YAML file for the MustGather CR, such as support-log-gather.yaml, that contains the following basic configuration:
apiVersion: operator.openshift.io/v1alpha1
kind: MustGather
metadata:
name: example-mg
namespace: must-gather-operator
spec:
serviceAccountName: must-gather-operator
audit: true
proxyConfig:
httpProxy: "http://proxy.example.com:8080"
httpsProxy: "https://proxy.example.com:8443"
noProxy: ".example.com,localhost"
mustGatherTimeout: "1h30m9s"
uploadTarget:
type: SFTP
sftp:
caseID: "04230315"
caseManagementAccountSecretRef:
name: mustgather-creds
host: "sftp.access.redhat.com"
retainResourcesOnCompletion: true
storage:
type: PersistentVolume
persistentVolume:
claim:
name: mustgather-pvc
subPath: must-gather-bundles/case-04230315
For more information on the configuration parameters, see "Configuration parameters for MustGather custom resource".
Create the MustGather object by running the following command:
$ oc create -f support-log-gather.yaml
Verify that the MustGather CR was created by running the following command:
$ oc get mustgather
NAME AGE
example-mg 7s
Verify the status of the pods in the Operator namespace by running the following command:
$ oc get pods
NAME READY STATUS RESTARTS AGE
must-gather-operator-657fc74d64-2gg2w 1/1 Running 0 13m
example-mg-gk8m8 2/2 Running 0 13s
A new pod with a name based on the MustGather CR is created. The status of all the pods must be Running.
To monitor the progress of the file upload, view the logs of the upload container in the job pod by running the following command:
$ oc logs -f pod/example-mg-gk8m8 -c upload
When successful, the process creates an archive and uploads it to the Red Hat Secure File Transfer Protocol (SFTP) server for the specified case.
You can manage your MustGather custom resource (CR) by creating a YAML file that specifies the parameters for data collection and the upload process.
The following table provides an overview of the parameters that you can configure in the MustGather CR.
| Parameter name | Description | Type |
|---|---|---|
| audit | Optional: Specifies whether to collect audit logs. The valid values are true and false. | Boolean |
| mustGatherTimeout | Optional: Specifies the time limit for the data collection. The value must be a floating-point number with a time unit. The valid units are ns, us, ms, s, m, and h, for example 1h30m9s. | String |
| proxyConfig | Optional: Defines the proxy configuration to be used. The default value is set to the cluster-level proxy configuration. | Object |
| proxyConfig.httpProxy | Specifies the URL of the proxy for HTTP requests. | URL |
| proxyConfig.httpsProxy | Specifies the URL of the proxy for HTTPS requests. | URL |
| proxyConfig.noProxy | Specifies a comma-separated list of domains for which the proxy must not be used. | List of URLs |
| retainResourcesOnCompletion | Optional: Specifies whether to retain the resources created for the data collection after the job completes. | Boolean |
| serviceAccountName | Optional: Specifies the name of the service account. | String |
| storage | Optional: Defines the storage configuration for the must-gather bundle. | Object |
| storage.persistentVolume | Defines the details of the persistent volume. | Object |
| storage.persistentVolume.claim | Defines the details of the persistent volume claim (PVC). | Object |
| storage.persistentVolume.claim.name | Specifies the name of the PVC to be used for storage. | String |
| storage.persistentVolume.subPath | Optional: Specifies the path within the PVC to store the bundle. | String |
| storage.type | Defines the type of storage. The only supported value is PersistentVolume. | String |
| uploadTarget | Optional: Defines the upload location for the must-gather bundle. | Object |
| uploadTarget.sftp.host | Optional: Specifies the destination server for the bundle upload. By default, the bundle is uploaded to sftp.access.redhat.com. | String |
| uploadTarget.sftp.caseID | Specifies the Red Hat Support case ID for which the diagnostic data is collected. | String |
| uploadTarget.sftp.caseManagementAccountSecretRef | Defines the credentials required for authenticating and uploading the files to the Red Hat Customer Portal support case. The value must contain a username field and a password field. | Object |
| uploadTarget.sftp.caseManagementAccountSecretRef.name | Specifies the name of the Kubernetes secret that contains the credentials. | String |
| uploadTarget.sftp.internalUser | Optional: Specifies whether the user provided in the caseManagementAccountSecretRef secret is an internal Red Hat user. | Boolean |
| uploadTarget.type | Specifies the type of upload location for the must-gather bundle. The only supported value is SFTP. | String |
|
If you do not specify uploadTarget, the bundle is stored locally for manual upload. |
You can uninstall the Support Log Gather by using the web console.
You have access to the cluster with cluster-admin privileges.
You have access to the OKD web console.
The Support Log Gather is installed.
Log in to the OKD web console.
Uninstall the Support Log Gather Operator.
Navigate to Ecosystem → Installed Operators.
Click the Options menu next to the Support Log Gather entry and click Uninstall Operator.
In the confirmation dialog, click Uninstall.
After you uninstall the Support Log Gather, you can remove the associated resources from your cluster.
You have access to the cluster with cluster-admin privileges.
You have access to the OKD web console.
Log in to the OKD web console.
Delete the component deployments in the must-gather-operator namespace:
Click the Project drop-down menu to view the list of all available projects, and select the must-gather-operator project.
Navigate to Workloads → Deployments.
Select the deployment that you want to delete.
Click the Actions drop-down menu, and select Delete Deployment.
In the confirmation dialog box, click Delete to delete the deployment.
Alternatively, delete deployments of the components present in the must-gather-operator namespace by using the command-line interface (CLI).
$ oc delete deployment -n must-gather-operator -l operators.coreos.com/support-log-gather-operator.must-gather-operator
Optional: Remove the custom resource definitions (CRDs) that were installed by the Support Log Gather:
Navigate to Administration → CustomResourceDefinitions.
Enter MustGather in the Name field to filter the CRDs.
Click the Options menu next to the following CRD, and select Delete Custom Resource Definition:
MustGather
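Alternatively, delete the CRD by using the CLI. The CRD name shown here is inferred from the MustGather API group (operator.openshift.io) used earlier in this section; verify it with oc get crd before deleting:
$ oc delete crd mustgathers.operator.openshift.io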
Optional: Remove the must-gather-operator namespace.
Navigate to Administration → Namespaces.
Click the Options menu next to the must-gather-operator namespace and select Delete Namespace.
In the confirmation dialog box, enter must-gather-operator and click Delete.
If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node.
You have SSH access to your bootstrap node.
You have the fully qualified domain name of the bootstrap node.
Query bootkube.service journald unit logs from a bootstrap node during OKD installation. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
|
The |
Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
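If you want to keep these logs for later review, you can redirect the output to a local file, for example:
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done' > bootstrap-container-logs.txt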
You can gather journald unit logs and other logs within /var/log on individual cluster nodes.
You have access to the cluster as a user with the cluster-admin role.
You have installed the OpenShift CLI (oc).
Your API service is still functional.
You have SSH access to your hosts.
Query kubelet journald unit logs from OKD cluster nodes. The following example queries control plane nodes only:
$ oc adm node-logs --role=master -u kubelet (1)
| 1 | Replace kubelet as appropriate to query other unit logs. |
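For example, the following command queries the crio unit logs on all worker nodes:
$ oc adm node-logs --role=worker -u crio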
Collect logs from specific subdirectories under /var/log/ on cluster nodes.
Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver
Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
|
OKD 4 cluster nodes running Fedora CoreOS (FCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running |
Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time.
You can use a combination of the oc adm must-gather command and the quay.io/openshift/origin-network-tools:latest container image to gather packet captures from nodes.
Analyzing packet captures can help you troubleshoot network communication issues.
The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes.
The tcpdump command records the packet captures in the pods.
When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine.
|
The sample command in the following procedure demonstrates performing a packet capture with the |
You are logged in to OKD as a user with the cluster-admin role.
You have installed the OpenShift CLI (oc).
Run a packet capture from the host network on some nodes by running the following command:
$ oc adm must-gather \
--dest-dir /tmp/captures \ (1)
--source-dir '/tmp/tcpdump/' \ (2)
--image quay.io/openshift/origin-network-tools:latest \ (3)
--node-selector 'node-role.kubernetes.io/worker' \ (4)
--host-network=true \ (5)
--timeout 30s \ (6)
-- \
tcpdump -i any \ (7)
-w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300
| 1 | The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory. |
| 2 | When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod. |
| 3 | The --image argument specifies a container image that includes the tcpdump command. |
| 4 | The --node-selector argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes. |
| 5 | The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node. |
| 6 | The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes. |
| 7 | The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name. |
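For example, the following variant uses the --node-name alternative from callout 4 to capture from a single node and a specific interface; <node_name> and <interface> are placeholders:
$ oc adm must-gather \
  --dest-dir /tmp/captures \
  --source-dir '/tmp/tcpdump/' \
  --image quay.io/openshift/origin-network-tools:latest \
  --node-name <node_name> \
  --host-network=true \
  --timeout 30s \
  -- \
  tcpdump -i <interface> \
  -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300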
Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets.
Review the packet capture files that oc adm must-gather transferred from the pods to your client machine:
tmp/captures
├── event-filter.html
├── ip-10-0-192-217-ec2-internal (1)
│ └── quay.io/openshift/origin-network-tools:latest...
│ └── 2022-01-13T19:31:31.pcap
├── ip-10-0-201-178-ec2-internal (1)
│ └── quay.io/openshift/origin-network-tools:latest...
│ └── 2022-01-13T19:31:30.pcap
├── ip-...
└── timestamp
| 1 | The packet captures are stored in directories that identify the hostname, container, and file name.
If you did not specify the --node-selector argument, then the directory level for the hostname is not present. |
toolbox is a tool that starts a container on a Fedora CoreOS (FCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run your favorite debugging or admin tools.
By default, running the toolbox command starts a container with the quay.io/fedora/fedora image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.
You have accessed a node with the oc debug node/<node_name> command.
You can access your system as a user with root privileges.
Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:
# chroot /host
Start the toolbox container:
# toolbox
Install the additional package, such as wget:
# dnf install -y <package_name>
By default, running the toolbox command starts a container with the quay.io/fedora/fedora image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run.
You have accessed a node with the oc debug node/<node_name> command.
You can access your system as a user with root privileges.
Set /host as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host within the pod. By changing the root directory to /host, you can run binaries contained in the host’s executable paths:
# chroot /host
Optional: If you need to use an alternative image instead of the default image, create a .toolboxrc file in the home directory for the root user ID, and specify the image metadata:
REGISTRY=quay.io (1)
IMAGE=fedora/fedora:latest (2)
TOOLBOX_NAME=toolbox-fedora-latest (3)
| 1 | Optional: Specify an alternative container registry. |
| 2 | Specify an alternative image to start. |
| 3 | Optional: Specify an alternative name for the toolbox container. |
Start a toolbox container by entering the following command:
# toolbox
|
If an existing |