Installing Central services for RHACS on other platforms

Central is the resource that contains the RHACS application management interface and services. It handles data persistence, API interactions, and RHACS portal access. You can use the same Central instance to secure multiple OpenShift Container Platform or Kubernetes clusters.

You can install Central by using one of the following methods:

  • Install using Helm charts

  • Install using the roxctl CLI (do not use this method unless you have a specific installation need that requires using it)

Install Central using Helm charts

You can install Central by using Helm charts with no customization (using the default values), or by using Helm charts with additional customizations of configuration parameters.

Install Central using Helm charts without customization

You can install RHACS on your Red Hat OpenShift cluster without any customizations. You must add the Helm chart repository and install the central-services Helm chart to install the centralized components of Central and Scanner.

Adding the Helm chart repository

Procedure
  • Add the RHACS charts repository.

    $ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/

The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:

  • Central services Helm chart (central-services) for installing the centralized components (Central and Scanner).

    You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation.

  • Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).

    Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor.

Verification
  • Run the following command to verify the added chart repository:

    $ helm search repo -l rhacs/

Installing the central-services Helm chart without customizations

Use the following instructions to install the central-services Helm chart to deploy the centralized components (Central and Scanner).

Procedure
  • Run the following command to install Central services and expose Central using a route:

    $ helm install -n stackrox \
      --create-namespace stackrox-central-services rhacs/central-services \
      --set imagePullSecrets.username=<username> \(1)
      --set imagePullSecrets.password=<password> \(2)
      --set central.exposure.route.enabled=true
    1 Include the user name for your pull secret for Red Hat Container Registry authentication.
    2 Include the password for your pull secret for Red Hat Container Registry authentication.
  • Or, run the following command to install Central services and expose Central using a load balancer:

    $ helm install -n stackrox \
      --create-namespace stackrox-central-services rhacs/central-services \
      --set imagePullSecrets.username=<username> \(1)
      --set imagePullSecrets.password=<password> \(2)
      --set central.exposure.loadBalancer.enabled=true
    1 Include the user name for your pull secret for Red Hat Container Registry authentication.
    2 Include the password for your pull secret for Red Hat Container Registry authentication.
  • Or, run the following command to install Central services and expose Central using port forward:

    $ helm install -n stackrox \
      --create-namespace stackrox-central-services rhacs/central-services \
      --set imagePullSecrets.username=<username> \(1)
      --set imagePullSecrets.password=<password>  (2)
    1 Include the user name for your pull secret for Red Hat Container Registry authentication.
    2 Include the password for your pull secret for Red Hat Container Registry authentication.
  • If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example:

    env:
      proxyConfig: |
        url: http://proxy.name:port
        username: username
        password: password
        excludes:
        - some.domain
  • If you already created one or more image pull secrets in the namespace in which you are installing, instead of using a username and password, you can use --set imagePullSecrets.useExisting="<pull-secret-1;pull-secret-2>".

  • Do not use image pull secrets:

    • If you are pulling your images from quay.io/stackrox-io or from a registry in a private network that does not require authentication, use --set imagePullSecrets.allowNone=true instead of specifying a username and password.

    • If you have already configured image pull secrets in the default service account of the namespace into which you are installing, use --set imagePullSecrets.useFromDefaultServiceAccount=true instead of specifying a username and password.

The output of the installation command includes:

  • An automatically generated administrator password.

  • Instructions on storing all the configuration values.

  • Any warnings that Helm generates.

Install Central using Helm charts with customizations

You can install RHACS on your Red Hat OpenShift cluster with customizations by using Helm chart configuration parameters with the helm install and helm upgrade commands. You can specify these parameters by using the --set option or by creating YAML configuration files.

Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:

  • Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.

  • Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.

  • Configuration file declarative-config-values.yaml: Create this file if you are using declarative configuration to add the declarative configuration mounts to Central.
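
For orientation, the following is a minimal sketch of how the two values files might look. The parameter names come from the tables later in this section, and every value shown is an illustrative placeholder that you replace with your own settings:

# values-public.yaml (non-sensitive settings)
image:
  registry: registry.redhat.io
central:
  exposure:
    route:
      enabled: true

# values-private.yaml (sensitive settings; store this file securely)
imagePullSecrets:
  username: <username>
  password: <password>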

Private configuration file

This section lists the configurable parameters of the values-private.yaml file. There are no default values for these parameters.

Image pull secrets

The credentials that are required for pulling images from the registry depend on the following factors:

  • If you are using a custom registry, you must specify these parameters:

    • imagePullSecrets.username

    • imagePullSecrets.password

    • image.registry

  • If you do not use a username and password to log in to the custom registry, you must specify one of the following parameters:

    • imagePullSecrets.allowNone

    • imagePullSecrets.useExisting

    • imagePullSecrets.useFromDefaultServiceAccount

Parameter Description

imagePullSecrets.username

The username of the account that is used to log in to the registry.

imagePullSecrets.password

The password of the account that is used to log in to the registry.

imagePullSecrets.allowNone

Use true if you are using a custom registry and it allows pulling images without credentials.

imagePullSecrets.useExisting

A comma-separated list of secrets as values. For example, secret1, secret2, secretN. Use this option if you have already created pre-existing image pull secrets with the given name in the target namespace.

imagePullSecrets.useFromDefaultServiceAccount

Use true if you have already configured the default service account in the target namespace with sufficiently scoped image pull secrets.
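
If you do not use a username and password, the alternative parameters described above can be set in values-private.yaml instead. The following sketch shows the options; use only one of them, and note that useExisting takes the comma-separated list of existing secret names described above:

imagePullSecrets:
  allowNone: true
# Or, to reuse pull secrets that already exist in the target namespace:
# imagePullSecrets:
#   useExisting: "secret1, secret2"
# Or, to rely on image pull secrets configured in the default service account:
# imagePullSecrets:
#   useFromDefaultServiceAccount: true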

Proxy configuration

If you are installing Red Hat Advanced Cluster Security for Kubernetes in a cluster that requires a proxy to connect to external services, you must specify your proxy configuration by using the proxyConfig parameter. For example:

env:
  proxyConfig: |
    url: http://proxy.name:port
    username: username
    password: password
    excludes:
    - some.domain
Parameter Description

env.proxyConfig

Your proxy configuration.

Central

Configurable parameters for Central.

For a new installation, you can skip the following parameters:

  • central.jwtSigner.key

  • central.serviceTLS.cert

  • central.serviceTLS.key

  • central.adminPassword.value

  • central.adminPassword.htpasswd

  • central.db.serviceTLS.cert

  • central.db.serviceTLS.key

  • central.db.password.value

  • When you do not specify values for these parameters, the Helm chart autogenerates values for them.

  • If you want to modify these values, you can use the helm upgrade command and specify the values by using the --set option.

For setting the administrator password, you can only use either central.adminPassword.value or central.adminPassword.htpasswd, but not both.

Parameter Description

central.jwtSigner.key

A private key which RHACS should use for signing JSON web tokens (JWTs) for authentication.

central.serviceTLS.cert

An internal certificate that the Central service should use for deploying Central.

central.serviceTLS.key

The private key of the internal certificate that the Central service should use.

central.defaultTLS.cert

The user-facing certificate that Central should use. RHACS uses this certificate for RHACS portal.

  • For a new installation, provide a certificate if you want to use your own; otherwise, RHACS installs Central by using a self-signed certificate.

  • If you are upgrading, RHACS uses the existing certificate and its key.

central.defaultTLS.key

The private key of the user-facing certificate that Central should use.

  • For a new installation, provide the private key if you want to use your own certificate; otherwise, RHACS installs Central by using a self-signed certificate.

  • If you are upgrading, RHACS uses the existing certificate and its key.

central.adminPassword.value

Administrator password for logging into RHACS.

central.adminPassword.htpasswd

Administrator password for logging into RHACS. This password is stored in hashed format using bcrypt.

central.db.serviceTLS.cert

An internal certificate that the Central DB service should use for deploying Central DB.

central.db.serviceTLS.key

The private key of the internal certificate that the Central DB service should use.

central.db.password.value

The password used to connect to the Central DB.

If you are using the central.adminPassword.htpasswd parameter, you must provide a bcrypt-encoded password hash. You can run the command htpasswd -nB admin to generate a password hash. For example:

htpasswd: |
  admin:<bcrypt-hash>
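
In values-private.yaml, this htpasswd block nests under central.adminPassword, mirroring the dotted parameter path in the table above. A minimal sketch with the hash left as a placeholder:

central:
  adminPassword:
    htpasswd: |
      admin:<bcrypt-hash>
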
Scanner

Configurable parameters for the StackRox Scanner and Scanner V4 (Technology Preview).

Scanner V4 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

For a new installation, you can skip the following parameters, and the Helm chart autogenerates values for them. If you are upgrading to a new version, specify the values for the following parameters:

  • scanner.dbPassword.value

  • scanner.serviceTLS.cert

  • scanner.serviceTLS.key

  • scanner.dbServiceTLS.cert

  • scanner.dbServiceTLS.key

  • scannerV4.db.password.value

  • scannerV4.indexer.serviceTLS.cert

  • scannerV4.indexer.serviceTLS.key

  • scannerV4.matcher.serviceTLS.cert

  • scannerV4.matcher.serviceTLS.key

  • scannerV4.db.serviceTLS.cert

  • scannerV4.db.serviceTLS.key

Parameter Description

scanner.dbPassword.value

The password to use for authentication with the Scanner database. Do not modify this parameter because RHACS automatically creates and uses its value internally.

scanner.serviceTLS.cert

An internal certificate that the StackRox Scanner service should use for deploying the StackRox Scanner.

scanner.serviceTLS.key

The private key of the internal certificate that the Scanner service should use.

scanner.dbServiceTLS.cert

An internal certificate that the Scanner-db service should use for deploying Scanner database.

scanner.dbServiceTLS.key

The private key of the internal certificate that the Scanner-db service should use.

scannerV4.db.password.value

The password to use for authentication with the Scanner V4 database. Do not modify this parameter because RHACS automatically creates and uses its value internally.

scannerV4.db.serviceTLS.cert

An internal certificate that the Scanner V4 DB service should use for deploying the Scanner V4 database.

scannerV4.db.serviceTLS.key

The private key of the internal certificate that the Scanner V4 DB service should use.

scannerV4.indexer.serviceTLS.cert

An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Indexer.

scannerV4.indexer.serviceTLS.key

The private key of the internal certificate that the Scanner V4 Indexer should use.

scannerV4.matcher.serviceTLS.cert

An internal certificate that the Scanner V4 service should use for deploying the Scanner V4 Matcher.

scannerV4.matcher.serviceTLS.key

The private key of the internal certificate that the Scanner V4 Matcher should use.
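
When you upgrade and need to carry these values over, the dotted parameter paths map onto nested keys in values-private.yaml. The following is a minimal sketch with placeholder values; the PEM content and password are whatever was generated or provided during the initial installation:

scanner:
  dbPassword:
    value: <existing_scanner_db_password>
  serviceTLS:
    cert: |
      <PEM_encoded_certificate>
    key: |
      <PEM_encoded_private_key>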

Public configuration file

This section lists the configurable parameters of the values-public.yaml file.

Image pull secrets

Image pull secrets are the credentials required for pulling images from your registry.

Parameter Description

imagePullSecrets.allowNone

Use true if you are using a custom registry and it allows pulling images without credentials.

imagePullSecrets.useExisting

A comma-separated list of secrets as values. For example, secret1, secret2. Use this option if you have already created pre-existing image pull secrets with the given name in the target namespace.

imagePullSecrets.useFromDefaultServiceAccount

Use true if you have already configured the default service account in the target namespace with sufficiently scoped image pull secrets.

Image

Image declares the configuration to set up the main registry, which the Helm chart uses to resolve images for the central.image, scanner.image, scanner.dbImage, scannerV4.image, and scannerV4.db.image parameters.

Parameter Description

image.registry

Address of your image registry. Either use a hostname, such as registry.redhat.io, or a remote registry hostname, such as us.gcr.io/stackrox-mirror.

Environment variables

Red Hat Advanced Cluster Security for Kubernetes automatically detects your cluster environment and sets values for env.openshift, env.istio, and env.platform. Only set these values to override the automatic cluster environment detection.

Parameter Description

env.openshift

Use true for installing on an OpenShift Container Platform cluster and overriding automatic cluster environment detection.

env.istio

Use true for installing on an Istio enabled cluster and overriding automatic cluster environment detection.

env.platform

The platform on which you are installing RHACS. Set its value to default or gke to specify cluster platform and override automatic cluster environment detection.

env.offlineMode

Use true to use RHACS in offline mode.
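
As a sketch, overriding the automatic environment detection in values-public.yaml might look like the following; set these keys only when you need to override the detection, as noted above:

env:
  openshift: true
  istio: false
  platform: default
  offlineMode: false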

Additional trusted certificate authorities

RHACS automatically trusts the system root certificates. When Central, the StackRox Scanner, or Scanner V4 must reach out to services that use certificates issued by an authority in your organization or by a globally trusted partner organization, you can add trust for these services by specifying the root certificate authority to trust with the following parameter:

Parameter Description

additionalCAs.<certificate_name>

Specify the PEM encoded certificate of the root certificate authority to trust.
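
For example, a values-public.yaml fragment that adds trust for one additional root CA might look like the following sketch, where the key is a certificate name of your choosing and the PEM content is a placeholder:

additionalCAs:
  corporate-root-ca.crt: |
    -----BEGIN CERTIFICATE-----
    <PEM_encoded_certificate>
    -----END CERTIFICATE-----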

Central

Configurable parameters for Central.

  • You must specify a persistent storage option as either hostPath or persistentVolumeClaim.

  • To expose the Central deployment for external access, you must specify one parameter: central.exposure.loadBalancer, central.exposure.nodePort, or central.exposure.route. When you do not specify any of these parameters, you must manually expose Central or access it by using port forwarding.

The following table includes settings for an external PostgreSQL database.

Parameter Description

central.declarativeConfiguration.mounts.configMaps

Mounts config maps used for declarative configurations.

central.declarativeConfiguration.mounts.secrets

Mounts secrets used for declarative configurations.

central.endpointsConfig

The endpoint configuration options for Central.

central.nodeSelector

Specify a node selector label as label-key: label-value to force Central to only schedule on nodes with the specified label.

central.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Central. This parameter is mainly used for infrastructure nodes.

central.exposeMonitoring

Specify true to expose Prometheus metrics endpoint for Central on port number 9090.

central.image.registry

A custom registry that overrides the global image.registry parameter for the Central image.

central.image.name

The custom image name that overrides the default Central image name (main).

central.image.tag

The custom image tag that overrides the default tag for the Central image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the helm upgrade command. If you mirror Central images in your own registry, do not modify the original image tags.

central.image.fullRef

Full reference including registry address, image name, and image tag for the Central image. Setting a value for this parameter overrides the central.image.registry, central.image.name, and central.image.tag parameters.

central.resources.requests.memory

The memory request for Central.

central.resources.requests.cpu

The CPU request for Central.

central.resources.limits.memory

The memory limit for Central.

central.resources.limits.cpu

The CPU limit for Central.

central.persistence.hostPath

The path on the node where RHACS should create a database volume. Red Hat does not recommend using this option.

central.persistence.persistentVolumeClaim.claimName

The name of the persistent volume claim (PVC) you are using.

central.persistence.persistentVolumeClaim.createClaim

Use true to create a new PVC, or false to use an existing claim.

central.persistence.persistentVolumeClaim.size

The size (in GiB) of the persistent volume managed by the specified claim.

central.exposure.loadBalancer.enabled

Use true to expose Central by using a load balancer.

central.exposure.loadBalancer.port

The port number on which to expose Central. The default port number is 443.

central.exposure.nodePort.enabled

Use true to expose Central by using the node port service.

central.exposure.nodePort.port

The port number on which to expose Central. When you skip this parameter, OpenShift Container Platform automatically assigns a port number. Red Hat recommends that you do not specify a port number if you are exposing RHACS by using a node port.

central.exposure.route.enabled

Use true to expose Central by using a route. This parameter is only available for OpenShift Container Platform clusters.

central.db.external

Use true to specify that Central DB should not be deployed and that an external database will be used.

central.db.source.connectionString

The connection string for Central to use to connect to the database. This is only used when central.db.external is set to true. The connection string must be in keyword/value format as described in the PostgreSQL documentation in "Additional resources".

  • Only PostgreSQL 13 is supported.

  • Connections through PgBouncer are not supported.

  • The user must be a superuser with the ability to create and delete databases.

central.db.source.minConns

The minimum number of connections to the database to be established.

central.db.source.maxConns

The maximum number of connections to the database to be established.

central.db.source.statementTimeoutMs

The number of milliseconds a single query or transaction can be active against the database.

central.db.postgresConfig

The postgresql.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources".

central.db.hbaConfig

The pg_hba.conf to be used for Central DB as described in the PostgreSQL documentation in "Additional resources".

central.db.nodeSelector

Specify a node selector label as label-key: label-value to force Central DB to only schedule on nodes with the specified label.

central.db.image.registry

A custom registry that overrides the global image.registry parameter for the Central DB image.

central.db.image.name

The custom image name that overrides the default Central DB image name (central-db).

central.db.image.tag

The custom image tag that overrides the default tag for the Central DB image. If you specify your own image tag during a new installation, you must manually increment this tag when you upgrade to a new version by running the helm upgrade command. If you mirror Central DB images in your own registry, do not modify the original image tags.

central.db.image.fullRef

Full reference including registry address, image name, and image tag for the Central DB image. Setting a value for this parameter overrides the central.db.image.registry, central.db.image.name, and central.db.image.tag parameters.

central.db.resources.requests.memory

The memory request for Central DB.

central.db.resources.requests.cpu

The CPU request for Central DB.

central.db.resources.limits.memory

The memory limit for Central DB.

central.db.resources.limits.cpu

The CPU limit for Central DB.

central.db.persistence.hostPath

The path on the node where RHACS should create a database volume. Red Hat does not recommend using this option.

central.db.persistence.persistentVolumeClaim.claimName

The name of the persistent volume claim (PVC) you are using.

central.db.persistence.persistentVolumeClaim.createClaim

Use true to create a new persistent volume claim, or false to use an existing claim.

central.db.persistence.persistentVolumeClaim.size

The size (in GiB) of the persistent volume managed by the specified claim.
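
Pulling several of the Central parameters above together, a typical values-public.yaml fragment might look like the following sketch. The claim names and sizes are illustrative placeholders, not recommendations:

central:
  exposeMonitoring: true
  exposure:
    route:
      enabled: true
  persistence:
    persistentVolumeClaim:
      claimName: stackrox-db
      createClaim: true
      size: 100
  db:
    persistence:
      persistentVolumeClaim:
        claimName: central-db
        createClaim: true
        size: 100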

StackRox Scanner

The following table lists the configurable parameters for the StackRox Scanner. This is the scanner used for node and platform scanning. If Scanner V4 is not enabled, the StackRox Scanner also performs image scanning. Beginning with version 4.4, you can enable Scanner V4 to provide image scanning. See the next table for Scanner V4 parameters.

Parameter Description

scanner.disable

Use true to install RHACS without the StackRox Scanner. When you use it with the helm upgrade command, Helm removes the existing StackRox Scanner deployment.

scanner.exposeMonitoring

Specify true to expose Prometheus metrics endpoint for the StackRox Scanner on port number 9090.

scanner.replicas

The number of replicas to create for the StackRox Scanner deployment. When you use it with the scanner.autoscaling parameter, this value sets the initial number of replicas.

scanner.logLevel

Configure the log level for the StackRox Scanner. Red Hat recommends that you not change the default log level value (INFO).

scanner.nodeSelector

Specify a node selector label as label-key: label-value to force the StackRox Scanner to only schedule on nodes with the specified label.

scanner.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner. This parameter is mainly used for infrastructure nodes.

scanner.autoscaling.disable

Use true to disable autoscaling for the StackRox Scanner deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect.

scanner.autoscaling.minReplicas

The minimum number of replicas for autoscaling.

scanner.autoscaling.maxReplicas

The maximum number of replicas for autoscaling.

scanner.resources.requests.memory

The memory request for the StackRox Scanner.

scanner.resources.requests.cpu

The CPU request for the StackRox Scanner.

scanner.resources.limits.memory

The memory limit for the StackRox Scanner.

scanner.resources.limits.cpu

The CPU limit for the StackRox Scanner.

scanner.dbResources.requests.memory

The memory request for the StackRox Scanner database deployment.

scanner.dbResources.requests.cpu

The CPU request for the StackRox Scanner database deployment.

scanner.dbResources.limits.memory

The memory limit for the StackRox Scanner database deployment.

scanner.dbResources.limits.cpu

The CPU limit for the StackRox Scanner database deployment.

scanner.image.registry

A custom registry for the StackRox Scanner image.

scanner.image.name

The custom image name that overrides the default StackRox Scanner image name (scanner).

scanner.dbImage.registry

A custom registry for the StackRox Scanner DB image.

scanner.dbImage.name

The custom image name that overrides the default StackRox Scanner DB image name (scanner-db).

scanner.dbNodeSelector

Specify a node selector label as label-key: label-value to force the StackRox Scanner DB to only schedule on nodes with the specified label.

scanner.dbTolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the StackRox Scanner DB. This parameter is mainly used for infrastructure nodes.
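
As an illustration of how the StackRox Scanner parameters combine in values-public.yaml, the following sketch sets replica, autoscaling, and scheduling behavior; the numeric values and the node selector label are placeholders:

scanner:
  replicas: 3
  logLevel: INFO
  autoscaling:
    disable: false
    minReplicas: 2
    maxReplicas: 5
  nodeSelector:
    <label_key>: <label_value>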

Scanner V4

The following table lists the configurable parameters for Scanner V4.

Scanner V4 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Parameter Description

scannerV4.db.persistence.persistentVolumeClaim.claimName

The name of the PVC to manage persistent data for Scanner V4. If no PVC with the given name exists, it is created. The default value is scanner-v4-db if not set. To prevent data loss, the PVC is not removed automatically when Central is deleted.

scannerV4.disable

Use false to enable Scanner V4. When setting this parameter, the StackRox Scanner must also be enabled by setting scanner.disable=false. Until feature parity between the StackRox Scanner and Scanner V4 is reached, Scanner V4 can only be used in combination with the StackRox Scanner. Enabling Scanner V4 without also enabling the StackRox Scanner is not supported. When you set this parameter to true with the helm upgrade command, Helm removes the existing Scanner V4 deployment.

scannerV4.exposeMonitoring

Specify true to expose Prometheus metrics endpoint for Scanner V4 on port number 9090.

scannerV4.indexer.replicas

The number of replicas to create for the Scanner V4 Indexer deployment. When you use it with the scannerV4.indexer.autoscaling parameter, this value sets the initial number of replicas.

scannerV4.indexer.logLevel

Configure the log level for the Scanner V4 Indexer. Red Hat recommends that you not change the default log level value (INFO).

scannerV4.indexer.nodeSelector

Specify a node selector label as label-key: label-value to force the Scanner V4 Indexer to only schedule on nodes with the specified label.

scannerV4.indexer.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Indexer. This parameter is mainly used for infrastructure nodes.

scannerV4.indexer.autoscaling.disable

Use true to disable autoscaling for the Scanner V4 Indexer deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect.

scannerV4.indexer.autoscaling.minReplicas

The minimum number of replicas for autoscaling.

scannerV4.indexer.autoscaling.maxReplicas

The maximum number of replicas for autoscaling.

scannerV4.indexer.resources.requests.memory

The memory request for the Scanner V4 Indexer.

scannerV4.indexer.resources.requests.cpu

The CPU request for the Scanner V4 Indexer.

scannerV4.indexer.resources.limits.memory

The memory limit for the Scanner V4 Indexer.

scannerV4.indexer.resources.limits.cpu

The CPU limit for the Scanner V4 Indexer.

scannerV4.matcher.replicas

The number of replicas to create for the Scanner V4 Matcher deployment. When you use it with the scannerV4.matcher.autoscaling parameter, this value sets the initial number of replicas.

scannerV4.matcher.logLevel

Red Hat recommends that you not change the default log level value (INFO).

scannerV4.matcher.nodeSelector

Specify a node selector label as label-key: label-value to force the Scanner V4 Matcher to only schedule on nodes with the specified label.

scannerV4.matcher.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 Matcher. This parameter is mainly used for infrastructure nodes.

scannerV4.matcher.autoscaling.disable

Use true to disable autoscaling for the Scanner V4 Matcher deployment. When you disable autoscaling, the minReplicas and maxReplicas parameters do not have any effect.

scannerV4.matcher.autoscaling.minReplicas

The minimum number of replicas for autoscaling.

scannerV4.matcher.autoscaling.maxReplicas

The maximum number of replicas for autoscaling.

scannerV4.matcher.resources.requests.memory

The memory request for the Scanner V4 Matcher.

scannerV4.matcher.resources.requests.cpu

The CPU request for the Scanner V4 Matcher.

scannerV4.db.resources.requests.memory

The memory request for the Scanner V4 database deployment.

scannerV4.db.resources.requests.cpu

The CPU request for the Scanner V4 database deployment.

scannerV4.db.resources.limits.memory

The memory limit for the Scanner V4 database deployment.

scannerV4.db.resources.limits.cpu

The CPU limit for the Scanner V4 database deployment.

scannerV4.db.nodeSelector

Specify a node selector label as label-key: label-value to force the Scanner V4 DB to only schedule on nodes with the specified label.

scannerV4.db.tolerations

If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for the Scanner V4 DB. This parameter is mainly used for infrastructure nodes.

scannerV4.db.image.registry

A custom registry for the Scanner V4 DB image.

scannerV4.db.image.name

The custom image name that overrides the default Scanner V4 DB image name (scanner-v4-db).

scannerV4.image.registry

A custom registry for the Scanner V4 image.

scannerV4.image.name

The custom image name that overrides the default Scanner V4 image name (scanner-v4).
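
Because Scanner V4 can currently only run alongside the StackRox Scanner, enabling it in a values file involves both disable flags, as described for the scannerV4.disable parameter above. A minimal sketch:

scanner:
  disable: false      # the StackRox Scanner must remain enabled
scannerV4:
  disable: false      # deploys the Scanner V4 Indexer, Matcher, and DB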

Customization

Use these parameters to specify additional attributes for all objects that RHACS creates.

Parameter Description

customize.labels

A custom label to attach to all objects.

customize.annotations

A custom annotation to attach to all objects.

customize.podLabels

A custom label to attach to all deployments.

customize.podAnnotations

A custom annotation to attach to all deployments.

customize.envVars

A custom environment variable for all containers in all objects.

customize.central.labels

A custom label to attach to all objects that Central creates.

customize.central.annotations

A custom annotation to attach to all objects that Central creates.

customize.central.podLabels

A custom label to attach to all Central deployments.

customize.central.podAnnotations

A custom annotation to attach to all Central deployments.

customize.central.envVars

A custom environment variable for all Central containers.

customize.scanner.labels

A custom label to attach to all objects that Scanner creates.

customize.scanner.annotations

A custom annotation to attach to all objects that Scanner creates.

customize.scanner.podLabels

A custom label to attach to all Scanner deployments.

customize.scanner.podAnnotations

A custom annotation to attach to all Scanner deployments.

customize.scanner.envVars

A custom environment variable for all Scanner containers.

customize.scanner-db.labels

A custom label to attach to all objects that Scanner DB creates.

customize.scanner-db.annotations

A custom annotation to attach to all objects that Scanner DB creates.

customize.scanner-db.podLabels

A custom label to attach to all Scanner DB deployments.

customize.scanner-db.podAnnotations

A custom annotation to attach to all Scanner DB deployments.

customize.scanner-db.envVars

A custom environment variable for all Scanner DB containers.

customize.scanner-v4-indexer.labels

A custom label to attach to all objects that the Scanner V4 Indexer creates and to the pods belonging to them.

customize.scanner-v4-indexer.annotations

A custom annotation to attach to all objects that the Scanner V4 Indexer creates and to the pods belonging to them.

customize.scanner-v4-indexer.podLabels

A custom label to attach to all objects that the Scanner V4 Indexer creates and to the pods belonging to them.

customize.scanner-v4-indexer.podAnnotations

A custom annotation to attach to all objects that the Scanner V4 Indexer creates and to the pods belonging to them.

customize.scanner-v4-indexer.envVars

A custom environment variable for all Scanner V4 Indexer containers and the pods belonging to them.

customize.scanner-v4-matcher.labels

A custom label to attach to all objects that the Scanner V4 Matcher creates and to the pods belonging to them.

customize.scanner-v4-matcher.annotations

A custom annotation to attach to all objects that the Scanner V4 Matcher creates and to the pods belonging to them.

customize.scanner-v4-matcher.podLabels

A custom label to attach to all objects that the Scanner V4 Matcher creates and to the pods belonging to them.

customize.scanner-v4-matcher.podAnnotations

A custom annotation to attach to all objects that the Scanner V4 Matcher creates and to the pods belonging to them.

customize.scanner-v4-matcher.envVars

A custom environment variable for all Scanner V4 Matcher containers and the pods belonging to them.

customize.scanner-v4-db.labels

A custom label to attach to all objects that the Scanner V4 DB creates and to the pods belonging to them.

customize.scanner-v4-db.annotations

A custom annotation to attach to all objects that the Scanner V4 DB creates and to the pods belonging to them.

customize.scanner-v4-db.podLabels

A custom label to attach to all objects that the Scanner V4 DB creates and to the pods belonging to them.

customize.scanner-v4-db.podAnnotations

A custom annotation to attach to all objects that the Scanner V4 DB creates and to the pods belonging to them.

customize.scanner-v4-db.envVars

A custom environment variable for all Scanner V4 DB containers and the pods belonging to them.

You can also use:

  • The customize.other.service/*.labels and customize.other.service/*.annotations parameters to specify labels and annotations for all service objects.

  • A specific service name, for example, customize.other.service/central-loadbalancer.labels and customize.other.service/central-loadbalancer.annotations, to set labels and annotations for that service only.
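
A hedged sketch of how these customization keys nest in a values file, assuming the dotted parameter paths map directly onto nested YAML keys; the label and annotation names are arbitrary examples:

customize:
  labels:
    owner: <team_name>
  central:
    podLabels:
      monitoring: enabled
  other:
    service/central-loadbalancer:
      labels:
        environment: <environment_name>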

Advanced customization

The parameters specified in this section are for information only. Red Hat does not support RHACS instances with modified namespace and release names.

Parameter Description

allowNonstandardNamespace

Use true to deploy RHACS into a namespace other than the default namespace stackrox.

allowNonstandardReleaseName

Use true to deploy RHACS with a release name other than the default stackrox-central-services.

Declarative configuration values

To use declarative configuration, you must create a YAML file (in this example, named "declarative-config-values.yaml") that adds the declarative configuration mounts to Central. This file is used in a Helm installation.

Procedure
  1. Create the YAML file (in this example, named declarative-config-values.yaml) using the following example as a guideline:

    central:
      declarativeConfiguration:
        mounts:
          configMaps:
            - declarative-configs
          secrets:
            - sensitive-declarative-configs
  2. Install the Central services Helm chart as described in "Installing the central-services Helm chart", referencing the declarative-config-values.yaml file.

Installing the central-services Helm chart

After you configure the values-public.yaml and values-private.yaml files, install the central-services Helm chart to deploy the centralized components (Central and Scanner).

Procedure
  • Run the following command:

    $ helm install -n stackrox --create-namespace \
      stackrox-central-services rhacs/central-services \
      -f <path_to_values_public.yaml> -f <path_to_values_private.yaml> (1)
    1 Use the -f option to specify the paths for your YAML configuration files.

Optional: If you are using declarative configuration, add -f <path_to_declarative-config-values.yaml> to this command to mount the declarative configurations file in Central.

Changing configuration options after deploying the central-services Helm chart

You can make changes to any configuration options after you have deployed the central-services Helm chart.

When using the helm upgrade command to make changes, the following guidelines and requirements apply:

  • You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes.

  • Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes.

    Scanner V4 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

    • If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values.

    • If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command.

Procedure
  1. Update the values-public.yaml and values-private.yaml configuration files with new values.

  2. Run the helm upgrade command and specify the configuration files using the -f option:

    $ helm upgrade -n stackrox \
      stackrox-central-services rhacs/central-services \
      --reuse-values \(1)
      -f <path_to_init_bundle_file> \
      -f <path_to_values_public.yaml> \
      -f <path_to_values_private.yaml>
    1 If you have modified values that are not included in the values_public.yaml and values_private.yaml files, include the --reuse-values parameter.

Install Central using the roxctl CLI

For production environments, Red Hat recommends using the Operator or Helm charts to install RHACS. Do not use the roxctl install method unless you have a specific installation need that requires using this method.

Installing the roxctl CLI

To install Red Hat Advanced Cluster Security for Kubernetes, you must install the roxctl CLI by downloading the binary. You can install roxctl on Linux, Windows, or macOS.

Installing the roxctl CLI on Linux

You can install the roxctl CLI binary on Linux by using the following procedure.

roxctl CLI for Linux is available for amd64, ppc64le, and s390x architectures.

Procedure
  1. Determine the roxctl architecture for the target operating system:

    $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
  2. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.4.5/bin/Linux/roxctl${arch}"
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version

Installing the roxctl CLI on macOS

You can install the roxctl CLI binary on macOS by using the following procedure.

roxctl CLI for macOS is available for the amd64 architecture.

Procedure
  1. Download the roxctl CLI:

    $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.4.5/bin/Darwin/roxctl"
  2. Remove all extended attributes from the binary:

    $ xattr -c roxctl
  3. Make the roxctl binary executable:

    $ chmod +x roxctl
  4. Place the roxctl binary in a directory that is on your PATH:

    To check your PATH, execute the following command:

    $ echo $PATH
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version

Installing the roxctl CLI on Windows

You can install the roxctl CLI binary on Windows by using the following procedure.

roxctl CLI for Windows is available for the amd64 architecture.

Procedure
  • Download the roxctl CLI:

    $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.4.5/bin/Windows/roxctl.exe
Verification
  • Verify the roxctl version you have installed:

    $ roxctl version

Using the interactive installer

Use the interactive installer to generate the required secrets, deployment configurations, and deployment scripts for your environment.

Procedure
  1. Run the interactive install command:

    $ roxctl central generate interactive

    Installing RHACS using the roxctl CLI creates PodSecurityPolicy (PSP) objects by default for backward compatibility. If you install RHACS on Kubernetes version 1.25 or later, or on OpenShift Container Platform version 4.12 or later, you must disable the PSP object creation. To do this, specify the --enable-pod-security-policies option as false for the roxctl central generate and roxctl sensor generate commands.

  2. Press Enter to accept the default value for a prompt or enter custom values as required. The following example shows the interactive installer prompts:

    Enter path to the backup bundle from which to restore keys and certificates (optional):
    Enter read templates from local filesystem (default: "false"):
    Enter path to helm templates on your local filesystem (default: "/path"):
    Enter PEM cert bundle file (optional): (1)
    Enter Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: "true"): (2)
    Enter administrator password (default: autogenerated):
    Enter orchestrator (k8s, openshift):
    Enter default container images settings (development_build, stackrox.io, rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: "development_build"):
    Enter the directory to output the deployment bundle to (default: "central-bundle"):
    Enter the OpenShift major version (3 or 4) to deploy on (default: "0"):
    Enter whether to enable telemetry (default: "false"):
    Enter central-db image to use (if unset, a default will be used according to --image-defaults):
    Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional):
    Enter the method of exposing Central (route, lb, np, none) (default: "none"): (3)
    Enter main image to use (if unset, a default will be used according to --image-defaults):
    Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"):
    Enter list of secrets to add as declarative configuration mounts in central (default: "[]"): (4)
    Enter list of config maps to add as declarative configuration mounts in central (default: "[]"): (5)
    Enter the deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"):
    Enter scanner-db image to use (if unset, a default will be used according to --image-defaults):
    Enter scanner image to use (if unset, a default will be used according to --image-defaults):
    Enter Central volume type (hostpath, pvc): (6)
    Enter external volume name for Central (default: "stackrox-db"):
    Enter external volume size in Gi for Central (default: "100"):
    Enter storage class name for Central (optional if you have a default StorageClass configured):
    Enter external volume name for Central DB (default: "central-db"):
    Enter external volume size in Gi for Central DB (default: "100"):
    Enter storage class name for Central DB (optional if you have a default StorageClass configured):
    1 If you want to add a custom TLS certificate, provide the file path for the PEM-encoded certificate. When you specify a custom certificate the interactive installer also prompts you to provide a PEM private key for the custom certificate you are using.
    2 If you are running Kubernetes version 1.25 or later, set this value to false.
    3 To use the RHACS portal, you must expose Central by using a route, a load balancer, or a node port.
    4 For more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes".
    5 For more information on using declarative configurations for authentication and authorization, see "Declarative configuration for authentication and authorization resources" in "Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes".
    6 If you plan to install Red Hat Advanced Cluster Security for Kubernetes on OpenShift Container Platform with a hostPath volume, you must modify the SELinux policy.

    On OpenShift Container Platform, to use a hostPath volume, you must modify the SELinux policy to allow access to the directory that the host and the container share, because SELinux blocks directory sharing by default. To modify the SELinux policy, run the following command:

    $ sudo chcon -Rt svirt_sandbox_file_t <full_volume_path>

    However, Red Hat does not recommend modifying the SELinux policy. Instead, use a PVC when installing on OpenShift Container Platform.

On completion, the installer creates a folder named central-bundle, which contains the necessary YAML manifests and scripts to deploy Central. In addition, it shows on-screen instructions for the scripts that you must run to deploy additional trusted certificate authorities, Central, and Scanner, and the authentication instructions for logging in to the RHACS portal, along with the autogenerated password if you did not provide one when answering the prompts.

Running the Central installation scripts

After you run the interactive installer, you can run the setup.sh script to install Central.

Procedure
  1. Run the setup.sh script to configure image registry access:

    $ ./central-bundle/central/scripts/setup.sh
  2. Create the necessary resources:

    $ oc create -R -f central-bundle/central
  3. Check the deployment progress:

    $ oc get pod -n stackrox -w
  4. After Central is running, find the RHACS portal IP address and open it in your browser. Depending on the exposure method you selected when answering the prompts, use one of the following methods to get the IP address.

    Exposure method: Route
    Command: oc -n stackrox get route central
    Address: The address under the HOST/PORT column in the output
    Example: https://central-stackrox.example.route

    Exposure method: Node Port
    Command: oc get node -owide && oc -n stackrox get svc central-loadbalancer
    Address: IP or hostname of any node, on the port shown for the service
    Example: https://198.51.100.0:31489

    Exposure method: Load Balancer
    Command: oc -n stackrox get svc central-loadbalancer
    Address: EXTERNAL-IP or hostname shown for the service, on port 443
    Example: https://192.0.2.0

    Exposure method: None
    Command: central-bundle/central/scripts/port-forward.sh 8443
    Address: https://localhost:8443
    Example: https://localhost:8443

If you selected an autogenerated password during the interactive installation, you can run the following command to see it for logging in to Central:

$ cat central-bundle/password