Upgrading from Red Hat Advanced Cluster Security for Kubernetes 3.0.44 or higher by using the roxctl CLI

Upgrade to the latest version of Red Hat Advanced Cluster Security for Kubernetes from version 3.0.44 or higher.

To upgrade Red Hat Advanced Cluster Security for Kubernetes to the latest version, you must perform the following:

  • Back up the Central database

  • Upgrade the Central cluster

  • Upgrade all secured clusters

Backing up the Central database

You can back up the Central database and use that backup to roll back from a failed upgrade or to restore data after an infrastructure disaster.

Prerequisites
  • You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. The Analyst system role has read permissions for all resources.

  • You have installed the roxctl CLI.

  • You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables.
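
    For example, you might set them as follows; the token file and the Central address shown here are hypothetical:

    $ export ROX_API_TOKEN=$(cat rox_api_token_file)
    $ export ROX_CENTRAL_ADDRESS=central.example.com:443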

Procedure
  • Run the backup command:

    • For Red Hat Advanced Cluster Security for Kubernetes 3.0.55 and newer:

      $ roxctl -e "$ROX_CENTRAL_ADDRESS" central backup
    • For Red Hat Advanced Cluster Security for Kubernetes 3.0.54 and older:

      $ roxctl -e "$ROX_CENTRAL_ADDRESS" central db backup

Upgrading the Central cluster

After you have backed up the Central database, the next step is to upgrade Central and Scanner.

Upgrading Central

You can update Central to the latest version by downloading and deploying the updated images.

Prerequisites
  • If you deploy images from a private image registry, first push the new images into your private registry, and then substitute your registry in the commands in this section (see the example after this list).

  • If you used Red Hat UBI-based images when you installed Red Hat Advanced Cluster Security for Kubernetes, replace the image names in the commands in this section with the following UBI-based image names:

    • For Central, Sensor, and Compliance, use registry.redhat.io/rh-acs/main-rhel

    • For Scanner, use registry.redhat.io/rh-acs/scanner-rhel and registry.redhat.io/rh-acs/scanner-db-rhel

    • For Collector, use registry.redhat.io/rh-acs/collector-rhel
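
    For example, a minimal sketch of mirroring the Central image into a hypothetical private registry at registry.example.com by using podman:

    $ podman pull registry.redhat.io/rh-acs/main:3.65.1
    $ podman tag registry.redhat.io/rh-acs/main:3.65.1 registry.example.com/rh-acs/main:3.65.1
    $ podman push registry.example.com/rh-acs/main:3.65.1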

Procedure
  • Run the following commands to upgrade Central:

    $ oc -n stackrox patch deploy/central -p '{"spec":{"template":{"spec":{"containers":[{"name":"central","env":[{"name":"ROX_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}]}]}}}}' (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc -n stackrox set image deploy/central central=registry.redhat.io/rh-acs/main:3.65.1 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

    If you are upgrading from Red Hat Advanced Cluster Security for Kubernetes 3.65.0, you must run the following additional command to create the stackrox-central-diagnostics role:

    $ oc -n stackrox patch role stackrox-central-diagnostics -p '{"rules":[{"apiGroups":["*"],"resources":["deployments","daemonsets","replicasets","configmaps","services"],"verbs":["get","list"]}]}' (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
Verification
  • Check that the new pods have deployed:

    $ oc get deploy -n stackrox -o wide (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc get pod -n stackrox --watch (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

Upgrading the roxctl CLI

To upgrade the roxctl CLI to the latest version, download and install the new binary.

Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

Procedure
  1. Download the latest version of the roxctl CLI:

    $ curl -O https://mirror.openshift.com/pub/rhacs/assets/3.65.1/bin/Linux/roxctl
  2. Make the roxctl binary executable:

    $ chmod +x roxctl
  3. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
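
    For example, if /usr/local/bin is on your PATH, you might move the binary there:

    $ sudo mv roxctl /usr/local/bin/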
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version
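
    The command prints the installed version. For this release, the output should be similar to the following:

    3.65.1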

Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

Procedure
  1. Download the latest version of the roxctl CLI:

    $ curl -O https://mirror.openshift.com/pub/rhacs/assets/3.65.1/bin/Darwin/roxctl
  2. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
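
    For example, if /usr/local/bin is on your PATH, you might move the binary there:

    $ sudo mv roxctl /usr/local/bin/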
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version

Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

Procedure
  • Download the latest version of the roxctl CLI:

    $ curl -O https://mirror.openshift.com/pub/rhacs/assets/3.65.1/bin/Windows/roxctl.exe
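
    To run roxctl.exe from any directory, place it in a directory that is on your PATH. For example, in PowerShell you can append the directory that contains the binary (C:\roxctl here is hypothetical) to the PATH for the current session:

    $env:Path += ";C:\roxctl"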
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version

Upgrading Scanner

You can update Scanner to the latest version by using the roxctl CLI.

Prerequisites
  • If you deploy images from a private image registry, first push the new images into your private registry, and then substitute your registry in the commands in this section.

  • If you used Red Hat UBI-based images when you installed Red Hat Advanced Cluster Security for Kubernetes, replace the image names in the commands in this section with the following UBI-based image names:

    • For Central, Sensor, and Compliance, use registry.redhat.io/rh-acs/main-rhel

    • For Scanner, use registry.redhat.io/rh-acs/scanner-rhel and registry.redhat.io/rh-acs/scanner-db-rhel

    • For Collector, use registry.redhat.io/rh-acs/collector-rhel

  • If you have created custom scanner configurations, you must apply those changes before updating the scanner configuration file:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" scanner generate
    $ oc apply -f scanner-bundle/scanner/02-scanner-03-tls-secret.yaml (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc apply -f scanner-bundle/scanner/02-scanner-04-scanner-config.yaml (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
Procedure
  1. Apply the patch for Scanner:

    $ oc -n stackrox patch hpa/scanner -p '{"spec":{"minReplicas":2}}' (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  2. Update the Scanner image:

    $ oc -n stackrox set image deploy/scanner scanner=registry.redhat.io/rh-acs/scanner:2.19.1 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  3. Update the Scanner database image:

    $ oc -n stackrox set image deploy/scanner-db db=registry.redhat.io/rh-acs/scanner-db:2.19.1 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc -n stackrox set image deploy/scanner-db init-db=registry.redhat.io/rh-acs/scanner-db:2.19.1 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
Verification
  • Check that the new pods have deployed:

    $ oc get deploy -n stackrox -o wide (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc get pod -n stackrox --watch (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

Verifying Central cluster upgrade

After you have upgraded both Central and Scanner, verify that the Central cluster upgrade is complete.

Procedure
  • Check the Central logs:

    $ oc logs -n stackrox deploy/central -c central (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

    If the upgrade is successful, you will see output similar to the following:

    No database restore directory found (this is not an error).
    Migrator: 2019/10/25 17:58:54: starting DB compaction
    Migrator: 2019/10/25 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact
    badger 2019/10/25 17:58:54 INFO: All 1 tables opened in 2ms
    badger 2019/10/25 17:58:55 INFO: Replaying file id: 0 at offset: 846357
    badger 2019/10/25 17:58:55 INFO: Replay took: 50.324µs
    badger 2019/10/25 17:58:55 DEBUG: Value log discard stats empty
    Migrator: 2019/10/25 17:58:55: DB is up to date. Nothing to do here.
    badger 2019/10/25 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
    version: 2019/10/25 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. We’re good to go!

Upgrading all secured clusters

After upgrading Central services, you must upgrade all secured clusters.

  • If you are using automatic upgrades, upgrade your secured clusters from the Clusters view in the RHACS portal.

  • If you are not using automatic upgrades, you must run the instructions in this section on all secured clusters, including the Central cluster.

To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission Controller, follow the instructions in this section.

Updating readiness probes

If you are upgrading from Red Hat Advanced Cluster Security for Kubernetes 3.65.0, you must run the following additional command to update the readiness probe path.

Procedure
  • Update the readiness probe path:

    $ oc -n stackrox patch deploy/sensor -p '{"spec":{"template":{"spec":{"containers":[{"name":"sensor","readinessProbe":{"httpGet":{"path":"/ready"}}}]}}}}' (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

Updating Admission Controller

Red Hat Advanced Cluster Security for Kubernetes 3.0.55 includes changes that affect admission controller integration. Therefore, if you are using an admission controller or want to take advantage of the admission controller integration’s new features, you must run some additional commands to upgrade the secured cluster.

If you are upgrading from a Red Hat Advanced Cluster Security for Kubernetes version between 3.0.44 and 3.0.54, and you are not using an Admission Controller and automatic upgrades, then you must manually redeploy Sensor on all clusters.

Prerequisites
  • You must be using the Red Hat Advanced Cluster Security for Kubernetes Admission Controller. To check if you are using the Admission Controller, run the following command:

    $ oc get validatingwebhookconfiguration stackrox (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

    If you get an error, it means that you are not using an admission controller.
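
    For example, if the webhook configuration does not exist, the command returns an error similar to the following:

    Error from server (NotFound): validatingwebhookconfigurations.admissionregistration.k8s.io "stackrox" not found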

Procedure
  1. Delete the existing validating webhook configuration:

    $ oc delete validatingwebhookconfiguration stackrox (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

    If you want to allow any additional admission controller features moving forward, navigate to the Clusters configuration view on the RHACS portal, select the cluster you are updating, and enable the appropriate options.

  2. Obtain a new Sensor bundle for the secured cluster:

    $ roxctl sensor get-bundle <cluster_name>
  3. Unzip the Sensor bundle and open the extracted directory.
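
    For example, assuming roxctl saved the bundle as sensor-<cluster_name>.zip:

    $ unzip sensor-<cluster_name>.zip -d sensor-<cluster_name>
    $ cd sensor-<cluster_name>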

  4. Create the admission-control deployment and related objects:

    $ oc -n stackrox apply -f admission-controller-scc.yaml (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc -n stackrox apply -f admission-controller-secret.yaml (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc -n stackrox apply -f admission-controller-rbac.yaml (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc -n stackrox apply -f admission-controller-netpol.yaml (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc -n stackrox apply -f admission-controller-pod-security.yaml (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc -n stackrox apply -f admission-controller.yaml (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

By default, the --listenOnEvents option is set to false during the upgrade. It controls the deployment of the admission controller webhook, which listens for exec and portforward events. If you are using OpenShift Container Platform version 3.11, do not set the --listenOnEvents option to true. Since these events are not available for OpenShift Container Platform 3.11, enabling them causes errors.

Updating OpenShift security context constraints

Depending on the version of Red Hat Advanced Cluster Security for Kubernetes you are upgrading to, you must update certain OpenShift Container Platform security context constraints (SCCs).

Run the commands in this section only if you are using Red Hat Advanced Cluster Security for Kubernetes with OpenShift Container Platform. Otherwise, skip the instructions in this section.

Procedure
  • Red Hat Advanced Cluster Security for Kubernetes 3.0.57.2 introduces changes to the SCCs. If you are upgrading to Red Hat Advanced Cluster Security for Kubernetes 3.0.57.2 or higher, you must patch the SCCs by running the following command:

    $ oc patch --type merge scc scanner -p '{"priority":0}'
  • Red Hat Advanced Cluster Security for Kubernetes 3.64.0 renames the SCCs. If you are upgrading to Red Hat Advanced Cluster Security for Kubernetes 3.64.0 or higher, you must delete and reapply the SCCs:

    1. Run the following commands to update Central:

      $ oc apply -f - <<EOF
      kind: SecurityContextConstraints
      apiVersion: security.openshift.io/v1
      metadata:
        name: stackrox-central
        labels:
          app.kubernetes.io/name: stackrox
        annotations:
          kubernetes.io/description: stackrox-central is the security constraint for the central server
          email: support@stackrox.com
          owner: stackrox
      allowHostDirVolumePlugin: false
      allowedCapabilities: []
      allowHostIPC: false
      allowHostNetwork: false
      allowHostPID: false
      allowHostPorts: false
      allowPrivilegeEscalation: false
      allowPrivilegedContainer: false
      defaultAddCapabilities: []
      fsGroup:
        type: MustRunAs
        ranges:
          - max: 4000
            min: 4000
      priority: 0
      readOnlyRootFilesystem: true
      requiredDropCapabilities: []
      runAsUser:
        type: MustRunAs
        uid: 4000
      seLinuxContext:
        type: MustRunAs
      seccompProfiles:
        - '*'
      users:
        - system:serviceaccount:stackrox:central
      volumes:
        - '*'
      EOF
      $ oc delete scc central
    2. Run the following commands to update Scanner:

      $ oc apply -f - <<EOF
      kind: SecurityContextConstraints
      apiVersion: security.openshift.io/v1
      metadata:
        name: stackrox-scanner
        labels:
          app.kubernetes.io/name: stackrox
        annotations:
          email: support@stackrox.com
          owner: stackrox
          kubernetes.io/description: stackrox-scanner is the security constraint for the Scanner container
      priority: 0
      runAsUser:
        type: RunAsAny
      seLinuxContext:
        type: RunAsAny
      seccompProfiles:
        - '*'
      users:
        - system:serviceaccount:stackrox:scanner
      volumes:
        - '*'
      allowHostDirVolumePlugin: false
      allowedCapabilities: []
      allowHostIPC: false
      allowHostNetwork: false
      allowHostPID: false
      allowHostPorts: false
      allowPrivilegeEscalation: false
      allowPrivilegedContainer: false
      defaultAddCapabilities: []
      fsGroup:
        type: RunAsAny
      readOnlyRootFilesystem: false
      requiredDropCapabilities: []
      EOF
      $ oc delete scc scanner
    3. Run the following commands to update the secured clusters:

      $ oc apply -f - <<EOF
      apiVersion: security.openshift.io/v1
      kind: SecurityContextConstraints
      metadata:
        name: stackrox-admission-control
        labels:
          app.kubernetes.io/name: stackrox
          auto-upgrade.stackrox.io/component: "sensor"
        annotations:
          email: support@stackrox.com
          owner: stackrox
          kubernetes.io/description: stackrox-admission-control is the security constraint for the admission controller
      users:
        - system:serviceaccount:stackrox:admission-control
      priority: 0
      runAsUser:
        type: RunAsAny
      seLinuxContext:
        type: RunAsAny
      seccompProfiles:
        - '*'
      supplementalGroups:
        type: RunAsAny
      fsGroup:
        type: RunAsAny
      groups: []
      readOnlyRootFilesystem: true
      allowHostDirVolumePlugin: false
      allowHostIPC: false
      allowHostNetwork: false
      allowHostPID: false
      allowHostPorts: false
      allowPrivilegeEscalation: false
      allowPrivilegedContainer: false
      allowedCapabilities: []
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      volumes:
        - configMap
        - downwardAPI
        - emptyDir
        - secret
      ---
      apiVersion: security.openshift.io/v1
      kind: SecurityContextConstraints
      metadata:
        name: stackrox-collector
        labels:
          app.kubernetes.io/name: stackrox
          auto-upgrade.stackrox.io/component: "sensor"
        annotations:
          email: support@stackrox.com
          owner: stackrox
          kubernetes.io/description: This SCC is based on privileged, hostaccess, and hostmount-anyuid
      users:
        - system:serviceaccount:stackrox:collector
      allowHostDirVolumePlugin: true
      allowPrivilegedContainer: true
      fsGroup:
        type: RunAsAny
      groups: []
      priority: 0
      readOnlyRootFilesystem: true
      runAsUser:
        type: RunAsAny
      seLinuxContext:
        type: RunAsAny
      seccompProfiles:
        - '*'
      supplementalGroups:
        type: RunAsAny
      allowHostIPC: false
      allowHostNetwork: false
      allowHostPID: false
      allowHostPorts: false
      allowPrivilegeEscalation: true
      allowedCapabilities: []
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      volumes:
        - configMap
        - downwardAPI
        - emptyDir
        - hostPath
        - secret
      ---
      apiVersion: security.openshift.io/v1
      kind: SecurityContextConstraints
      metadata:
        name: stackrox-sensor
        labels:
          app.kubernetes.io/name: stackrox
          auto-upgrade.stackrox.io/component: "sensor"
        annotations:
          email: support@stackrox.com
          owner: stackrox
          kubernetes.io/description: stackrox-sensor is the security constraint for the sensor
      users:
        - system:serviceaccount:stackrox:sensor
        - system:serviceaccount:stackrox:sensor-upgrader
      priority: 0
      runAsUser:
        type: RunAsAny
      seLinuxContext:
        type: RunAsAny
      seccompProfiles:
        - '*'
      supplementalGroups:
        type: RunAsAny
      fsGroup:
        type: RunAsAny
      groups: []
      readOnlyRootFilesystem: true
      allowHostDirVolumePlugin: false
      allowHostIPC: false
      allowHostNetwork: false
      allowHostPID: false
      allowHostPorts: false
      allowPrivilegeEscalation: true
      allowPrivilegedContainer: false
      allowedCapabilities: []
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      volumes:
        - configMap
        - downwardAPI
        - emptyDir
        - secret
      EOF
      $ oc delete scc admission-control collector sensor

Updating other images

If you are upgrading from a Red Hat Advanced Cluster Security for Kubernetes version between 3.0.44 and 3.0.54, you must update the Sensor, Compliance, and Collector images.

Prerequisites
  • You must be using a Red Hat Advanced Cluster Security for Kubernetes version between 3.0.44 and 3.0.54.

If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.

Procedure
  1. Update the Sensor image:

    $ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/rh-acs/main:3.65.1 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  2. Update the Compliance image:

    $ oc -n stackrox set image ds/collector compliance=registry.redhat.io/rh-acs/main:3.65.1 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  3. Update the Collector image:

    $ oc -n stackrox set image ds/collector collector=registry.redhat.io/rh-acs/collector:3.3.1-latest (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  4. Apply the patch for Sensor:

    Applying the patch for Sensor is only required when you are upgrading from a Red Hat Advanced Cluster Security for Kubernetes version that is between 3.0.44 and 3.0.54. Otherwise, skip this step.

    $ oc -n stackrox patch deploy/sensor -p '{"spec":{"template":{"spec":{"containers":[{"name":"sensor","env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"volumeMounts":[{"name":"cache","mountPath":"/var/cache/stackrox"}]}],"volumes":[{"name":"cache","emptyDir":{}}]}}}}' (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  5. Apply the following cluster role and cluster role binding:

    Applying the cluster role and the cluster role binding is only required when you are upgrading from a Red Hat Advanced Cluster Security for Kubernetes version that is between 3.0.44 and 3.0.54. Otherwise, skip this step.

    $ oc -n stackrox apply -f - <<EOF (1)
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: stackrox:review-tokens
      labels:
        app.kubernetes.io/name: stackrox
        auto-upgrade.stackrox.io/component: "sensor"
      annotations:
        owner: stackrox
        email: "support@stackrox.com"
    rules:
    - resources:
      - tokenreviews
      apiGroups: ["authentication.k8s.io"]
      verbs:
      - create
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: stackrox:review-tokens-binding
      labels:
        app.kubernetes.io/name: stackrox
        auto-upgrade.stackrox.io/component: "sensor"
      annotations:
        owner: stackrox
        email: "support@stackrox.com"
    subjects:
    - kind: ServiceAccount
      name: sensor
      namespace: stackrox
    roleRef:
      kind: ClusterRole
      name: stackrox:review-tokens
      apiGroup: rbac.authorization.k8s.io
    EOF
    1 If you use Kubernetes, enter kubectl instead of oc.

Verifying secured cluster upgrade

After you have upgraded secured clusters, verify that the updated pods are working.

Procedure
  • Check that the new pods have deployed:

    $ oc get deploy,ds -n stackrox -o wide (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
    $ oc get pod -n stackrox --watch (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

Rolling back Central

You can roll back to a previous version of Central if the upgrade to a new version is unsuccessful.

Rolling back Central normally

You can roll back to a previous version of Central if upgrading Red Hat Advanced Cluster Security for Kubernetes fails.

Prerequisites
  • You must be using Red Hat Advanced Cluster Security for Kubernetes 3.0.57.0 or higher.

  • Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you will not be able to roll back to an earlier version.
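
    For example, you can check the available space on the Central persistent volume; this sketch assumes the default mount path /var/lib/stackrox:

    $ oc -n stackrox exec deploy/central -c central -- df -h /var/lib/stackrox (1)
    1 If you use Kubernetes, enter kubectl instead of oc.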

Procedure
  • Run the following command to roll back to a previous version when an upgrade fails (before the Central service starts):

    $ oc -n stackrox rollout undo deploy/central (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

Rolling back Central forcefully

You can use forced rollback to roll back to an earlier version of Central (after the Central service starts).

Using forced rollback to switch back to a previous version might result in loss of data and functionality.

Prerequisites
  • You must be using Red Hat Advanced Cluster Security for Kubernetes 3.0.58.0 or higher.

  • Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you will not be able to roll back to an earlier version.

Procedure
  • Run the following commands to perform a forced rollback:

    • To forcefully roll back to the previously installed version:

      $ oc -n stackrox rollout undo deploy/central (1)
      1 If you use Kubernetes, enter kubectl instead of oc.
    • To forcefully roll back to a specific version:

      1. Edit Central’s ConfigMap:

        $ oc -n stackrox edit configmap/central-config (1)
        1 If you use Kubernetes, enter kubectl instead of oc.
      2. Update the value of the maintenance.forceRollbackVersion key:

        data:
          central-config.yaml: |
            maintenance:
              safeMode: false
              compaction:
                 enabled: true
                 bucketFillFraction: .5
                 freeFractionThreshold: 0.75
              forceRollbackVersion: <x.x.x.x> (1)
          ...
        1 Specify the version that you want to roll back to.
      3. Update the Central image version:

        $ oc -n stackrox \ (1)
          set image deploy/central central=registry.redhat.io/rh-acs/main:<x.x.x.x> (2)
        
        1 If you use Kubernetes, enter kubectl instead of oc.
        2 Specify the version that you want to roll back to. It must be the same version that you specified for the maintenance.forceRollbackVersion key in the central-config config map.

Verifying last check-in time for secured clusters

The updated Sensors and Collectors continue to report the latest data from each secured cluster.

The last time Sensor contacted Central is visible in the RHACS portal.

Procedure
  1. On the RHACS portal, navigate to Platform Configuration → Clusters.

  2. The cluster list shows the Last check-in time for each cluster.

  • If any Sensor has not checked in for more than five minutes, check the cluster logs for that Sensor to ensure that it is operating as usual.

  • The displayed check-in time does not update automatically. You must reload the page to see the updates.

Revoking the API token

For security reasons, Red Hat recommends that you revoke the API token that you have used to complete the Central database backup.

Prerequisites
  • After the upgrade, you must reload the RHACS portal page and re-accept the certificate to continue using the RHACS portal.

Procedure
  1. On the RHACS portal, navigate to Platform Configuration → Integrations.

  2. Scroll down to the Authentication Tokens category, and click API Token.

  3. Select the checkbox in front of the token name that you want to revoke.

  4. Click Revoke.

  5. On the confirmation dialog box, click Confirm.