Upgrading using the roxctl CLI

You can upgrade to the latest version of Red Hat Advanced Cluster Security for Kubernetes (RHACS) from a supported older version.

  • You need to perform the manual upgrade procedure only if you used the roxctl CLI to install RHACS.

  • Each version upgrade has manual steps that you must follow, for example, from version 3.74 to version 4.0, and from version 4.0 to version 4.1. Therefore, Red Hat recommends upgrading one version at a time: first from 3.74 to 4.0, then from 4.0 to 4.1, then from 4.1 to 4.2, and so on until the selected version is installed. For full functionality, Red Hat recommends upgrading to the most recent version.

To upgrade Red Hat Advanced Cluster Security for Kubernetes to the latest version, you must perform the following:

  • Back up the Central database

  • Upgrade the roxctl CLI

  • Generate a Central database provisioning bundle

  • Upgrade Central

  • Upgrade Scanner

  • Verify all upgraded secured clusters

Backing up the Central database

You can back up the Central database and use that backup to roll back from a failed upgrade or to restore data after an infrastructure disaster.

Prerequisites
  • You must have an API token with read permission for all resources of Red Hat Advanced Cluster Security for Kubernetes. The Analyst system role has read permissions for all resources.

  • You have installed the roxctl CLI.

  • You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables, as shown in the example after this list.
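
    For example, you might set the variables as follows; the values shown are placeholders that you replace with your own API token and Central address:

    $ export ROX_API_TOKEN=<api_token>
    $ export ROX_CENTRAL_ADDRESS=<address>:<port_number>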

Procedure
  • Run the backup command:

    $ roxctl -e "$ROX_CENTRAL_ADDRESS" central backup

Upgrading the roxctl CLI

To upgrade the roxctl CLI to the latest version, you must uninstall the existing version of the roxctl CLI and then install the latest version.

Uninstalling the roxctl CLI

You can uninstall the roxctl CLI binary on Linux by using the following procedure.

Procedure
  • Find and delete the roxctl binary:

    $ ROXPATH=$(which roxctl) && rm -f $ROXPATH (1)
    1 Depending on your environment, you might need administrator rights to delete the roxctl binary.
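
    Optionally, confirm that the binary is no longer found on your PATH; the following quick check is not part of the official procedure and prints nothing when roxctl has been removed:

    $ which roxctl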

Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

roxctl CLI for Linux is available for amd64, ppc64le, and s390x architectures.

Procedure
  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.3.8/bin/Linux/roxctl${arch}"
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
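
    For example, assuming /usr/local/bin appears in your PATH, you can move the binary there; adjust the destination directory and the use of sudo for your environment:

    $ sudo mv roxctl /usr/local/bin/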
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version

Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

roxctl CLI for macOS is available for the amd64 architecture.

Procedure
  1. Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.3.8/bin/Darwin/roxctl
  2. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version

Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

roxctl CLI for Windows is available for the amd64 architecture.

Procedure
  • Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.3.8/bin/Windows/roxctl.exe
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version

Generating Central database provisioning bundle

Before upgrading Central, you must generate a database provisioning bundle. This bundle is a tar archive that has a README file, a few YAML configuration files, and some scripts that aid in the installation process.

Prerequisites
  • You must have an API token with the Admin role.

  • You must have installed the roxctl CLI.

Procedure
  1. Set the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables:

    $ export ROX_API_TOKEN=<api_token>
    $ export ROX_CENTRAL_ADDRESS=<address>:<port_number>
  2. Run the central db generate command:

    $ roxctl -e $ROX_CENTRAL_ADDRESS central db generate \
      <cluster_type> \ (1)
      <storage> \ (2)
      --output-dir <bundle_dir> \ (3)
      --central-db-image registry.redhat.io/advanced-cluster-security/rhacs-central-db-rhel8:4.3.8
    1 For cluster_type, specify the type of your cluster: k8s for Kubernetes or openshift for OpenShift Container Platform.
    2 For storage, specify hostpath or pvc. If you use pvc, you can use additional options to specify the volume name, size, and storage class. Run $ roxctl central db generate openshift pvc -h for more details.
    3 For bundle_dir, specify the path where you want to save the generated provisioning bundle.
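
    For example, on OpenShift Container Platform with PVC storage, the command might look like the following; the output directory name central-db-bundle is only illustrative:

    $ roxctl -e $ROX_CENTRAL_ADDRESS central db generate \
      openshift \
      pvc \
      --output-dir central-db-bundle \
      --central-db-image registry.redhat.io/advanced-cluster-security/rhacs-central-db-rhel8:4.3.8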
Next Step
  • Use the Central DB provisioning bundle to create additional resources.

Creating resources by using the Central DB provisioning bundle

Before you upgrade the Central cluster, you must use the Central DB provisioning bundle to create additional resources that the Central cluster requires. This bundle is a tar archive that has a README file, a few YAML configuration files, and some scripts that aid in the installation process.

Prerequisites
  • You must have generated a Central DB provisioning bundle.

  • You must have extracted the tar archive bundle.

Procedure
  1. Open the extracted bundle directory and run the setup script:

    $ ./scripts/setup.sh
  2. Run the deploy-central-db script:

    $ ./deploy-central-db.sh
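
    Before you continue, you can check that the Central DB resources created by the scripts are running; the deployment name central-db is the default and is assumed here. If you use Kubernetes, enter kubectl instead of oc:

    $ oc get deploy central-db -n stackrox -o wide
    $ oc get pod -n stackrox --watch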

Upgrading the Central cluster

After you have created a backup of the Central database and generated the necessary resources by using the provisioning bundle, the next step is to upgrade the Central cluster. This process involves upgrading Central and Scanner.

Upgrading Central

You can update Central to the latest version by downloading and deploying the updated images.

Procedure
  • Run the following command to update the Central image:

    $ oc -n stackrox set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.3.8 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
Verification
  • Verify that the new pods have deployed:

    $ oc get deploy -n stackrox -o wide
    $ oc get pod -n stackrox --watch
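
    As an additional check that is not part of the official procedure, you can print the image reference used by the Central deployment and confirm that it now points to the 4.3.8 image. If you use Kubernetes, enter kubectl instead of oc:

    $ oc -n stackrox get deploy/central -o jsonpath='{.spec.template.spec.containers[*].image}'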

Upgrading Scanner

You can update Scanner to the latest version by downloading and deploying the updated images.

Procedure
  • Run the following command to update the Scanner image:

    $ oc -n stackrox set image deploy/scanner scanner=registry.redhat.io/advanced-cluster-security/rhacs-scanner-rhel8:4.3.8 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
Verification
  • Verify that the new pods have deployed:

    $ oc get deploy -n stackrox -o wide
    $ oc get pod -n stackrox --watch

Verifying the Central cluster upgrade

After you have upgraded both Central and Scanner, verify that the Central cluster upgrade is complete.

Procedure
  • Check the Central logs by running the following command:

    $ oc logs -n stackrox deploy/central -c central (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
Sample output of a successful upgrade
No database restore directory found (this is not an error).
Migrator: 2023/04/19 17:58:54: starting DB compaction
Migrator: 2023/04/19 17:58:54: Free fraction of 0.0391 (40960/1048576) is < 0.7500. Will not compact
badger 2023/04/19 17:58:54 INFO: All 1 tables opened in 2ms
badger 2023/04/19 17:58:55 INFO: Replaying file id: 0 at offset: 846357
badger 2023/04/19 17:58:55 INFO: Replay took: 50.324µs
badger 2023/04/19 17:58:55 DEBUG: Value log discard stats empty
Migrator: 2023/04/19 17:58:55: DB is up to date. Nothing to do here.
badger 2023/04/19 17:58:55 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
version: 2023/04/19 17:58:55.189866 ensure.go:49: Info: Version found in the DB was current. We’re good to go!

Upgrading all secured clusters

After upgrading Central services, you must upgrade all secured clusters.

  • If you are using automatic upgrades:

    • Update all your secured clusters by using automatic upgrades.

    • Skip the instructions in this section and follow the instructions in the Verifying upgrades and Revoking the API token sections.

  • If you are not using automatic upgrades, you must follow the instructions in this section on all secured clusters, including the Central cluster.

    • To ensure optimal functionality, use the same RHACS version for your secured clusters and the cluster on which Central is installed.

To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow the instructions in this section.

Updating other images

If you are not using automatic upgrades, you must update the Sensor, Collector, Compliance, and Admission controller images on each secured cluster.

If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure.

Procedure
  1. Update the Sensor image:

    $ oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.3.8 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  2. Update the Compliance image:

    $ oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.3.8 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  3. Update the Collector image:

    $ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.3.8 (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

    If you are using the collector slim image, run the following command instead:

    $ oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-slim-rhel8:4.3.8
  4. Update the admission control image:

    $ oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.3.8

Verifying secured cluster upgrade

After you have upgraded secured clusters, verify that the updated pods are working.

Procedure
  1. Check that the new pods have deployed. Enter the following command:

    $ oc get deploy,ds -n stackrox -o wide (1)
    1 If you use Kubernetes, enter kubectl instead of oc.
  2. Enter the following command:

    $ oc get pod -n stackrox --watch (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

Enabling RHCOS node scanning

If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS).

Procedure
  1. Run one of the following commands to update the compliance container.

    • For a default compliance container with metrics disabled, run the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
    • For a compliance container with Prometheus metrics enabled, run the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}'
  2. Update the Collector DaemonSet (DS) by taking the following steps:

    1. Add new volume mounts to Collector DS by running the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}'
    2. Add the new NodeScanner container by running the following command:

      $ oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.3.8","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}'
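
To confirm that the patches were applied, you can list the container names in the Collector DaemonSet and check that node-inventory now appears alongside collector and compliance; this optional check assumes no other changes to the DaemonSet. If you use Kubernetes, enter kubectl instead of oc:

    $ oc -n stackrox get ds/collector -o jsonpath='{.spec.template.spec.containers[*].name}'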

Removing Central-attached PV after upgrading to version 4.1 and later

Kubernetes and OpenShift Container Platform do not delete persistent volumes (PV) automatically. When you upgrade RHACS from earlier versions, the Central PV called stackrox-db remains mounted. However, beginning with RHACS 4.1, Central no longer needs the previously attached PV.

The PV holds data and persistent files used by earlier RHACS versions. You can use the PV to roll back to a version earlier than RHACS 4.1. Alternatively, if you have a large RocksDB backup bundle for Central, you can use the PV to restore that data.

After you complete the upgrade to 4.1, you can remove the Central-attached persistent volume claim (PVC) to free up the storage. Only remove the PVC if you do not plan to roll back or restore from earlier RocksDB backups.

After removing the PVC, you cannot roll back Central to a version earlier than RHACS 4.1 or restore large RocksDB backups.

Removing Central-attached PV using the roxctl CLI

Remove the Central-attached persistent volume claim (PVC) stackrox-db to free up storage space.

Procedure
  • Run the following command:

    $ oc get deployment central -n stackrox -o json | jq '(.spec.template.spec.volumes[] | select(.name=="stackrox-db"))={"name": "stackrox-db", "emptyDir": {}}' | oc apply -f -

    This command replaces the stackrox-db entry in spec.template.spec.volumes with a local emptyDir volume.

Verification
  • Run the following command:

    $ oc -n stackrox describe pvc stackrox-db | grep -i 'Used By'
    Used By: <none> (1)
    
    1 Wait until you see Used By: <none>. It might take a few minutes.
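
    When Used By shows <none> and you are certain that you do not need to roll back or restore RocksDB backups, you can delete the PVC to release the storage; this deletion is irreversible and is shown here only as a sketch, not as part of the official procedure. If you use Kubernetes, enter kubectl instead of oc:

    $ oc -n stackrox delete pvc stackrox-db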

Rolling back Central

You can roll back to a previous version of Central if the upgrade to a new version is unsuccessful.

Rolling back Central normally

You can roll back to a previous version of Central if upgrading Red Hat Advanced Cluster Security for Kubernetes fails.

Prerequisites
  • Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you might not be able to roll back to an earlier version.

Procedure
  • Run the following command to roll back to a previous version when an upgrade fails (before the Central service starts):

    $ oc -n stackrox rollout undo deploy/central (1)
    1 If you use Kubernetes, enter kubectl instead of oc.

Rolling back Central forcefully

You can use forced rollback to roll back to an earlier version of Central (after the Central service starts).

Using forced rollback to switch back to a previous version might result in loss of data and functionality.

Prerequisites
  • Before you can perform a rollback, you must have free disk space available on your persistent storage. Red Hat Advanced Cluster Security for Kubernetes uses disk space to keep a copy of databases during the upgrade. If the disk space is not enough to store a copy and the upgrade fails, you will not be able to roll back to an earlier version.

Procedure
  • Run the following commands to perform a forced rollback:

    • To forcefully roll back to the previously installed version:

      $ oc -n stackrox rollout undo deploy/central (1)
      1 If you use Kubernetes, enter kubectl instead of oc.
    • To forcefully roll back to a specific version:

      1. Edit Central’s ConfigMap:

        $ oc -n stackrox edit configmap/central-config (1)
        1 If you use Kubernetes, enter kubectl instead of oc.
      2. Update the value of the maintenance.forceRollbackVersion key:

        data:
          central-config.yaml: |
            maintenance:
              safeMode: false
              compaction:
                 enabled: true
                 bucketFillFraction: .5
                 freeFractionThreshold: 0.75
              forceRollbackVersion: <x.x.x.x> (1)
          ...
        1 Specify the version that you want to roll back to.
      3. Update the Central image version:

        $ oc -n stackrox \ (1)
          set image deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<x.x.x.x> (2)
        
        1 If you use Kubernetes, enter kubectl instead of oc.
        2 Specify the version that you want to roll back to. It must be the same version that you specified for the maintenance.forceRollbackVersion key in the central-config config map.
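
        After updating the image, you can optionally watch the rollout until it completes; this check is not part of the official procedure. If you use Kubernetes, enter kubectl instead of oc:

        $ oc -n stackrox rollout status deploy/central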

Verifying upgrades

The updated Sensors and Collectors continue to report the latest data from each secured cluster.

The last time Sensor contacted Central is visible in the RHACS portal.

Procedure
  1. On the RHACS portal, navigate to Platform Configuration → System Health.

  2. Check that Sensor Upgrade shows that the clusters are up to date with Central.

Revoking the API token

For security reasons, Red Hat recommends that you revoke the API token that you used to complete the Central database backup.

Prerequisites
  • After the upgrade, you must reload the RHACS portal page and re-accept the certificate to continue using the RHACS portal.

Procedure
  1. On the RHACS portal, navigate to Platform Configuration → Integrations.

  2. Scroll down to the Authentication Tokens category, and click API Token.

  3. Select the checkbox in front of the token name that you want to revoke.

  4. Click Revoke.

  5. On the confirmation dialog box, click Confirm.