When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.
It is recommended to provide:
Data gathered using the oc adm must-gather command
The unique cluster ID
The oc adm must-gather
CLI command collects the information from your cluster that is most likely needed for debugging issues, including:
Resource definitions
Service logs
By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local.
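If you want the data written to a specific local directory instead, the command also accepts a --dest-dir flag; the path in this example is arbitrary:
$ oc adm must-gather --dest-dir=/tmp/debug-data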
Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections:
To collect data related to one or more specific features, use the --image
argument with an image, as listed in a following section.
For example:
$ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0
To collect the audit logs, use the -- /usr/bin/gather_audit_logs
argument, as described in a following section.
For example:
$ oc adm must-gather -- /usr/bin/gather_audit_logs
Audit logs are not collected as part of the default set of information to reduce the size of the files.
When you run oc adm must-gather
, a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local
. This directory is created in the current working directory.
For example:
NAMESPACE NAME READY STATUS RESTARTS AGE
...
openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s
openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s
...
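To watch these pods while the collection runs, you can list pods across all namespaces and filter on the generated must-gather prefix, for example:
$ oc get pods --all-namespaces | grep must-gather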
You can gather debugging information about your cluster by using the oc adm must-gather
CLI command.
Access to the cluster as a user with the cluster-admin
role.
The OpenShift Container Platform CLI (oc
) installed.
Navigate to the directory where you want to store the must-gather
data.
If your cluster is using a restricted network, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters on restricted networks, you must import the default must-gather image as an image stream.
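For example, if the default must-gather image stream is available in the openshift namespace of your mirror configuration, you might import it with:
$ oc import-image is/must-gather -n openshift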
Run the oc adm must-gather
command:
$ oc adm must-gather
Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state.
If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect
command to gather information for particular resources.
Contact Red Hat Support for the recommended resources to gather.
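For example, to inspect a specific cluster Operator and write the output to a local directory (the Operator named here is only illustrative):
$ oc adm inspect clusteroperator/openshift-apiserver --dest-dir=inspect.local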
Create a compressed file from the must-gather
directory that was just created in your working directory. For example, on a computer that uses a Linux
operating system, run the following command:
$ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ (1)
1 | Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name. |
Attach the compressed file to your support case on the Red Hat Customer Portal.
You can gather debugging information about specific features by using the oc adm must-gather
CLI command with the --image
or --image-stream
argument. The must-gather
tool supports multiple images, so you can gather data about more than one feature by running a single command.
Dedicated must-gather images are available for data collection for the following features:
OpenShift Virtualization
OpenShift Serverless
Red Hat OpenShift Service Mesh
The Migration Toolkit for Containers
Red Hat OpenShift Container Storage
OpenShift Logging
The Local Storage Operator
To collect the default must-gather data in addition to feature-specific data, add the --image-stream=openshift/must-gather argument.
Access to the cluster as a user with the cluster-admin
role.
The OpenShift Container Platform CLI (oc
) installed.
Navigate to the directory where you want to store the must-gather
data.
Run the oc adm must-gather
command with one or more --image
or --image-stream
arguments. For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization:
$ oc adm must-gather \
--image-stream=openshift/must-gather \ (1)
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.8.7 (2)
1 | The default OpenShift Container Platform must-gather image |
2 | The must-gather image for OpenShift Virtualization |
You can use the must-gather
tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command:
$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
-o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
Example must-gather output for OpenShift Logging:
├── cluster-logging
│ ├── clo
│ │ ├── cluster-logging-operator-74dd5994f-6ttgt
│ │ ├── clusterlogforwarder_cr
│ │ ├── cr
│ │ ├── csv
│ │ ├── deployment
│ │ └── logforwarding_cr
│ ├── collector
│ │ ├── fluentd-2tr64
│ ├── eo
│ │ ├── csv
│ │ ├── deployment
│ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4
│ ├── es
│ │ ├── cluster-elasticsearch
│ │ │ ├── aliases
│ │ │ ├── health
│ │ │ ├── indices
│ │ │ ├── latest_documents.json
│ │ │ ├── nodes
│ │ │ ├── nodes_stats.json
│ │ │ └── thread_pool
│ │ ├── cr
│ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
│ │ └── logs
│ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
│ ├── install
│ │ ├── co_logs
│ │ ├── install_plan
│ │ ├── olmo_logs
│ │ └── subscription
│ └── kibana
│ ├── cr
│ ├── kibana-9d69668d4-2rkvz
├── cluster-scoped-resources
│ └── core
│ ├── nodes
│ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml
│ └── persistentvolumes
│ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml
├── event-filter.html
├── gather-debug.log
└── namespaces
├── openshift-logging
│ ├── apps
│ │ ├── daemonsets.yaml
│ │ ├── deployments.yaml
│ │ ├── replicasets.yaml
│ │ └── statefulsets.yaml
│ ├── batch
│ │ ├── cronjobs.yaml
│ │ └── jobs.yaml
│ ├── core
│ │ ├── configmaps.yaml
│ │ ├── endpoints.yaml
│ │ ├── events
│ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml
│ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml
│ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml
│ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml
│ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml
│ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml
│ │ ├── events.yaml
│ │ ├── persistentvolumeclaims.yaml
│ │ ├── pods.yaml
│ │ ├── replicationcontrollers.yaml
│ │ ├── secrets.yaml
│ │ └── services.yaml
│ ├── openshift-logging.yaml
│ ├── pods
│ │ ├── cluster-logging-operator-74dd5994f-6ttgt
│ │ │ ├── cluster-logging-operator
│ │ │ │ └── cluster-logging-operator
│ │ │ │ └── logs
│ │ │ │ ├── current.log
│ │ │ │ ├── previous.insecure.log
│ │ │ │ └── previous.log
│ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml
│ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff
│ │ │ ├── cluster-logging-operator-registry
│ │ │ │ └── cluster-logging-operator-registry
│ │ │ │ └── logs
│ │ │ │ ├── current.log
│ │ │ │ ├── previous.insecure.log
│ │ │ │ └── previous.log
│ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml
│ │ │ └── mutate-csv-and-generate-sqlite-db
│ │ │ └── mutate-csv-and-generate-sqlite-db
│ │ │ └── logs
│ │ │ ├── current.log
│ │ │ ├── previous.insecure.log
│ │ │ └── previous.log
│ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
│ │ ├── elasticsearch-im-app-1596030300-bpgcx
│ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml
│ │ │ └── indexmanagement
│ │ │ └── indexmanagement
│ │ │ └── logs
│ │ │ ├── current.log
│ │ │ ├── previous.insecure.log
│ │ │ └── previous.log
│ │ ├── fluentd-2tr64
│ │ │ ├── fluentd
│ │ │ │ └── fluentd
│ │ │ │ └── logs
│ │ │ │ ├── current.log
│ │ │ │ ├── previous.insecure.log
│ │ │ │ └── previous.log
│ │ │ ├── fluentd-2tr64.yaml
│ │ │ └── fluentd-init
│ │ │ └── fluentd-init
│ │ │ └── logs
│ │ │ ├── current.log
│ │ │ ├── previous.insecure.log
│ │ │ └── previous.log
│ │ ├── kibana-9d69668d4-2rkvz
│ │ │ ├── kibana
│ │ │ │ └── kibana
│ │ │ │ └── logs
│ │ │ │ ├── current.log
│ │ │ │ ├── previous.insecure.log
│ │ │ │ └── previous.log
│ │ │ ├── kibana-9d69668d4-2rkvz.yaml
│ │ │ └── kibana-proxy
│ │ │ └── kibana-proxy
│ │ │ └── logs
│ │ │ ├── current.log
│ │ │ ├── previous.insecure.log
│ │ │ └── previous.log
│ └── route.openshift.io
│ └── routes.yaml
└── openshift-operators-redhat
├── ...
Run the oc adm must-gather
command with one or more --image
or --image-stream
arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt:
$ oc adm must-gather \
--image-stream=openshift/must-gather \ (1)
--image=quay.io/kubevirt/must-gather (2)
1 | The default OpenShift Container Platform must-gather image |
2 | The must-gather image for KubeVirt |
Create a compressed file from the must-gather
directory that was just created in your working directory. For example, on a computer that uses a Linux
operating system, run the following command:
$ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ (1)
1 | Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name. |
Attach the compressed file to your support case on the Red Hat Customer Portal.
You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. You can gather audit logs for:
etcd server
Kubernetes API server
OpenShift OAuth API server
OpenShift API server
Run the oc adm must-gather
command with the -- /usr/bin/gather_audit_logs
flag:
$ oc adm must-gather -- /usr/bin/gather_audit_logs
Create a compressed file from the must-gather
directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:
$ tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 (1)
1 | Replace must-gather.local.472290403699006248 with the actual directory name. |
Attach the compressed file to your support case on the Red Hat Customer Portal.
When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI (oc
).
Access to the cluster as a user with the cluster-admin
role.
Access to the web console or the OpenShift CLI (oc
) installed.
To open a support case and have your cluster ID autofilled using the web console:
From the toolbar, navigate to (?) Help → Open Support Case.
The Cluster ID value is autofilled.
To manually obtain your cluster ID using the web console:
Navigate to Home → Dashboards → Overview.
The value is available in the Cluster ID field of the Details section.
To obtain your cluster ID using the OpenShift CLI (oc
), run the following command:
$ oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
sosreport
is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport
provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.
In some support interactions, Red Hat Support may ask you to collect a sosreport
archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather
.
The recommended way to generate a sosreport
for an OpenShift Container Platform 4.8 cluster node is through a debug pod.
You have access to the cluster as a user with the cluster-admin
role.
You have SSH access to your hosts.
You have installed the OpenShift CLI (oc
).
You have a Red Hat standard or premium Subscription.
You have a Red Hat Customer Portal account.
You have an existing Red Hat Support case ID.
Obtain a list of cluster nodes:
$ oc get nodes
Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug
:
$ oc debug node/my-cluster-node
To enter into a debug session on the target node that is tainted with the NoExecute
effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace:
$ oc new-project dummy
$ oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}'
$ oc debug node/my-cluster-node
Set /host
as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host
within the pod. By changing the root directory to /host
, you can run binaries contained in the host’s executable paths:
# chroot /host
OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
Start a toolbox
container, which includes the required binaries and plugins to run sosreport
:
# toolbox
If an existing toolbox container is already running, remove it with podman rm <toolbox_container_name> and start a new toolbox container, to avoid issues with the sosreport plugins.
Collect a sosreport
archive.
Run the sosreport
command and enable the crio.all
and crio.logs
CRI-O container engine sosreport
plugins:
# sosreport -k crio.all=on -k crio.logs=on (1)
1 | -k enables you to define sosreport plugin parameters outside of the defaults. |
Press Enter when prompted to continue.
Provide the Red Hat Support case ID. sosreport
adds the ID to the archive’s file name.
The sosreport
output provides the archive’s location and checksum. The following sample output references support case ID 01234567
:
Your sosreport has been generated and saved in:
/host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz (1)
The checksum is: 382ffc167510fd71b4f12a4f40b97a4e
1 | The sosreport archive’s file path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host . |
Provide the sosreport
archive to Red Hat Support for analysis, using one of the following methods.
Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.
From within the toolbox container, run redhat-support-tool
to attach the archive directly to an existing Red Hat support case. This example uses support case ID 01234567
:
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-sosreport.tar.xz (1)
1 | The toolbox container mounts the host’s root directory at /host . Reference the absolute path from the toolbox container’s root directory, including /host/ , when specifying files to upload through the redhat-support-tool command. |
Upload the file to an existing Red Hat support case.
Concatenate the sosreport
archive by running the oc debug node/<node_name>
command and redirect the output to a file. This command assumes you have exited the previous oc debug
session:
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz (1)
1 | The debug container mounts the host’s root directory at /host . Reference the absolute path from the debug container’s root directory, including /host , when specifying target files for concatenation. |
OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.
Navigate to an existing support case within https://access.redhat.com/support/cases/.
Select Attach files and follow the prompts to upload the file.
If you experience bootstrap-related issues, you can gather bootkube.service
journald
unit logs and container logs from the bootstrap node.
You have SSH access to your bootstrap node.
You have the fully qualified domain name of the bootstrap node.
Query bootkube.service
journald
unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn>
with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service
The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
Collect logs from the bootstrap node containers using podman
on the bootstrap node. Replace <bootstrap_fqdn>
with the bootstrap node’s fully qualified domain name:
$ ssh core@<bootstrap_fqdn> 'for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done'
You can gather journald
unit logs and other logs within /var/log
on individual cluster nodes.
You have access to the cluster as a user with the cluster-admin
role.
Your API service is still functional.
You have installed the OpenShift CLI (oc
).
You have SSH access to your hosts.
Query kubelet
journald
unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes (also known as the master nodes) only:
$ oc adm node-logs --role=master -u kubelet (1)
1 | Replace kubelet as appropriate to query other unit logs. |
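For example, to query the CRI-O container engine unit (crio) logs on the same nodes:
$ oc adm node-logs --role=master -u crio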
Collect logs from specific subdirectories under /var/log/
on cluster nodes.
Retrieve a list of logs contained within a /var/log/
subdirectory. The following example lists files in /var/log/openshift-apiserver/
on all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver
Inspect a specific log within a /var/log/
subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log
contents from all control plane nodes:
$ oc adm node-logs --role=master --path=openshift-apiserver/audit.log
If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log
:
$ ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log
OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead.
When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod.
You have access to the cluster as a user with the cluster-admin
role.
You have installed the OpenShift CLI (oc
).
You have a Red Hat standard or premium Subscription.
You have a Red Hat Customer Portal account.
You have an existing Red Hat Support case ID.
You have SSH access to your hosts.
Obtain a list of cluster nodes:
$ oc get nodes
Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug
:
$ oc debug node/my-cluster-node
Set /host
as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host
within the pod. By changing the root directory to /host
, you can run binaries contained in the host’s executable paths:
# chroot /host
OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
From within the chroot
environment console, obtain the node’s interface names:
# ip ad
Start a toolbox
container, which includes the required binaries and plugins to run sosreport
:
# toolbox
If an existing toolbox container is already running, remove it with podman rm <toolbox_container_name> and start a new toolbox container, to avoid issues with the sosreport plugins.
Initiate a tcpdump
session on the cluster node and redirect output to a capture file. This example uses ens5
as the interface name:
$ tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap (1)
1 | The tcpdump capture file’s path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host . |
If a tcpdump
capture is required for a specific container on the node, follow these steps.
Determine the target container ID. The chroot host
command precedes the crictl
command in this step because the toolbox container mounts the host’s root directory at /host
:
# chroot /host crictl ps
Determine the container’s process ID. In this example, the container ID is a7fe32346b120
:
# chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print $2}'
Initiate a tcpdump
session on the container and redirect output to a capture file. This example uses 49628
as the container’s process ID and ens5
as the interface name. The nsenter
command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container’s process ID, the tcpdump
command is run in the container’s namespace from the host:
# nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_$(date +%d_%m_%Y-%H_%M_%S-%Z).pcap (1)
1 | The tcpdump capture file’s path is outside of the chroot environment because the toolbox container mounts the host’s root directory at /host . |
Provide the tcpdump
capture file to Red Hat Support for analysis, using one of the following methods.
Upload the file to an existing Red Hat support case directly from an OpenShift Container Platform cluster.
From within the toolbox container, run redhat-support-tool
to attach the file directly to an existing Red Hat Support case. This example uses support case ID 01234567
:
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-tcpdump-capture-file.pcap (1)
1 | The toolbox container mounts the host’s root directory at /host . Reference the absolute path from the toolbox container’s root directory, including /host/ , when specifying files to upload through the redhat-support-tool command. |
Upload the file to an existing Red Hat support case.
Concatenate the sosreport
archive by running the oc debug node/<node_name>
command and redirect the output to a file. This command assumes you have exited the previous oc debug
session:
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap (1)
1 | The debug container mounts the host’s root directory at /host . Reference the absolute path from the debug container’s root directory, including /host , when specifying target files for concatenation. |
OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.
Navigate to an existing support case within https://access.redhat.com/support/cases/.
Select Attach files and follow the prompts to upload the file.
When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal, or from an OpenShift Container Platform cluster directly by using the redhat-support-tool
command.
You have access to the cluster as a user with the cluster-admin
role.
You have SSH access to your hosts.
You have installed the OpenShift CLI (oc
).
You have a Red Hat standard or premium Subscription.
You have a Red Hat Customer Portal account.
You have an existing Red Hat Support case ID.
Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal.
Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name>
command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz
from a debug container to /var/tmp/my-diagnostic-data.tar.gz
:
$ oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz (1)
1 | The debug container mounts the host’s root directory at /host . Reference the absolute path from the debug container’s root directory, including /host , when specifying target files for concatenation. |
OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path>.
Navigate to an existing support case within https://access.redhat.com/support/cases/.
Select Attach files and follow the prompts to upload the file.
Upload diagnostic data to an existing Red Hat support case directly from an OpenShift Container Platform cluster.
Obtain a list of cluster nodes:
$ oc get nodes
Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug
:
$ oc debug node/my-cluster-node
Set /host
as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host
within the pod. By changing the root directory to /host
, you can run binaries contained in the host’s executable paths:
# chroot /host
OpenShift Container Platform 4.8 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended and nodes will be tainted as accessed. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead.
Start a toolbox
container, which includes the required binaries to run redhat-support-tool
:
# toolbox
If an existing toolbox container is already running, remove it with podman rm <toolbox_container_name> and start a new toolbox container.
Run redhat-support-tool
to attach a file from the debug pod directly to an existing Red Hat Support case. This example uses support case ID '01234567' and example file path /host/var/tmp/my-diagnostic-data.tar.gz
:
# redhat-support-tool addattachment -c 01234567 /host/var/tmp/my-diagnostic-data.tar.gz (1)
1 | The toolbox container mounts the host’s root directory at /host . Reference the absolute path from the toolbox container’s root directory, including /host/ , when specifying files to upload through the redhat-support-tool command. |
toolbox
toolbox
is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport
and redhat-support-tool
.
The primary purpose for a toolbox
container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image.
Installing packages to a toolbox container
By default, running the toolbox
command starts a container with the registry.redhat.io/rhel8/support-tools:latest
image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages.
You have accessed a node with the oc debug node/<node_name>
command.
Set /host
as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host
within the pod. By changing the root directory to /host
, you can run binaries contained in the host’s executable paths:
# chroot /host
Start the toolbox container:
# toolbox
Install an additional package, such as wget:
# dnf install -y <package_name>
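For example, to add wget to the running toolbox container:
# dnf install -y wget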
Starting an alternative image with toolbox
By default, running the toolbox
command starts a container with the registry.redhat.io/rhel8/support-tools:latest
image. You can start an alternative image by creating a .toolboxrc
file and specifying the image to run.
You have accessed a node with the oc debug node/<node_name>
command.
Set /host
as the root directory within the debug shell. The debug pod mounts the host’s root file system in /host
within the pod. By changing the root directory to /host
, you can run binaries contained in the host’s executable paths:
# chroot /host
Create a .toolboxrc
file in the home directory for the root user ID:
# vi ~/.toolboxrc
REGISTRY=quay.io (1)
IMAGE=fedora/fedora:33-x86_64 (2)
TOOLBOX_NAME=toolbox-fedora-33 (3)
1 | Optional: Specify an alternative container registry. |
2 | Specify an alternative image to start. |
3 | Optional: Specify an alternative name for the toolbox container. |
Start a toolbox container with the alternative image:
# toolbox
If an existing toolbox container is already running, remove it with podman rm <toolbox_container_name> and start a new toolbox container so that the alternative image is used.