You can install RHACS Cloud Service on your secured clusters by using the Operator or Helm charts. You can also use the roxctl
CLI to install it, but do not use this method unless you have a specific installation need that requires using it.
You have created your Red Hat OpenShift cluster and installed the Operator on it.
In the ACS Console in RHACS Cloud Service, you have created and downloaded the init bundle.
You applied the init bundle by using the oc create
command.
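The init bundle step can be sketched as follows. This is a minimal illustration, not the exact bundle contents: the here-doc stands in for the bundle you downloaded from the ACS Console, which typically contains the sensor-tls, collector-tls, and admission-control-tls secrets.

```shell
# Stand-in for the downloaded init bundle (placeholder contents only):
cat > cluster_init_bundle.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: collector-tls
EOF

# The secrets-format bundle is a list of Kubernetes Secret manifests;
# sanity-check it before applying:
grep -q "kind: Secret" cluster_init_bundle.yaml && echo "bundle looks ok"

# Apply it into the namespace used by the secured cluster services:
# oc create -f cluster_init_bundle.yaml -n stackrox
```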
During installation, you noted the Central API Endpoint, including the address and the port number. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, and then clicking the ACS instance you created.
You can install secured cluster services on your clusters by using the SecuredCluster
custom resource. You must install the secured cluster services on every cluster in your environment that you want to monitor.
When you install secured cluster services, Collector is also installed. To install Collector on systems that have Unified Extensible Firmware Interface (UEFI) and that have Secure Boot enabled, you must use eBPF probes because kernel modules are unsigned, and the UEFI firmware cannot load unsigned packages. Collector identifies Secure Boot status at the start and switches to eBPF probes if required.
If you are using OpenShift Container Platform, you must install version 4.11 or later.
You have installed the RHACS Operator.
You have generated an init bundle and applied it to the cluster.
On the OpenShift Container Platform web console, navigate to the Operators → Installed Operators page.
Click the RHACS Operator.
Click Secured Cluster from the central navigation menu in the Operator details page.
Click Create SecuredCluster.
Select one of the following options in the Configure via field:
Form view: Use this option if you want to use the on-screen fields to configure the secured cluster and do not need to change any other fields.
YAML view: Use this view to set up the secured cluster using the YAML file. The YAML file is displayed in the window and you can edit fields in it. If you select this option, when you are finished editing the file, click Create.
If you are using Form view, accept or edit the default project name. The default value is stackrox-secured-cluster-services.
Optional: Add any labels for the cluster.
Enter a unique name for your SecuredCluster
custom resource.
For Central Endpoint, enter the address and port number of your Central instance. For example, if Central is available at https://central.example.com, specify the central endpoint as central.example.com:443. The default value of central.stackrox.svc:443 works only when you install secured cluster services and Central in the same cluster. Do not use the default value when you are configuring multiple clusters. Instead, use the hostname when configuring the Central Endpoint value for each cluster.
For RHACS Cloud Service, use the Central API Endpoint, including the address and the port number. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, and then clicking the ACS instance you created.
Use central.stackrox.svc:443 only if you are installing secured cluster services and Central in the same cluster.
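Deriving the endpoint value from the Central address can be sketched in shell; the URL below is an example, not your actual instance address:

```shell
# Example Central address; substitute the address of your ACS instance.
central_url="https://central.example.com"

# The endpoint value is the hostname plus the TLS port, without the scheme:
endpoint="${central_url#https://}:443"
echo "${endpoint}"   # -> central.example.com:443
```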
Accept the default values or configure custom values if needed. For example, you might need to configure TLS if you are using custom certificates or untrusted CAs.
Click Create.
Optional: Configure additional secured cluster settings.
Verify installation.
You can install RHACS on secured clusters by using Helm charts either with no customization, using the default values, or with customized configuration parameters.
First, ensure that you add the Helm chart repository.
Add the RHACS charts repository.
$ helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including:
Secured Cluster Services Helm chart (secured-cluster-services) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim).
Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components on all nodes that you want to monitor.
Run the following command to verify the added chart repository:
$ helm search repo -l rhacs/
Use the following instructions to install the secured-cluster-services
Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).
To install Collector on systems that have Unified Extensible Firmware Interface (UEFI) and that have Secure Boot enabled, you must use eBPF probes because kernel modules are unsigned, and the UEFI firmware cannot load unsigned packages. Collector identifies Secure Boot status at the start and switches to eBPF probes if required.
You must have generated an RHACS init bundle for your cluster.
You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
You must have the Central API Endpoint, including the address and the port number. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, then clicking the ACS instance you created.
Run the following command on your Kubernetes-based clusters:
$ helm install -n stackrox --create-namespace \
  stackrox-secured-cluster-services rhacs/secured-cluster-services \
  -f <path_to_cluster_init_bundle.yaml> \ (1)
  -f <path_to_pull_secret.yaml> \ (2)
  --set clusterName=<name_of_the_secured_cluster> \
  --set centralEndpoint=<endpoint_of_central_service> \ (3)
  --set imagePullSecrets.username=<your_redhat.com_username> \ (4)
  --set imagePullSecrets.password=<your_redhat.com_password> (5)
(1) Use the -f option to specify the path for the init bundle.
(2) Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication.
(3) Enter the Central API Endpoint, including the address and the port number. You can view this information again in the Red Hat Hybrid Cloud Console by choosing Advanced Cluster Security → ACS Instances, and then clicking the ACS instance you created.
(4) Include the user name for your pull secret for Red Hat Container Registry authentication.
(5) Include the password for your pull secret for Red Hat Container Registry authentication.
You can use Helm chart configuration parameters with the helm install
and helm upgrade
commands.
Specify these parameters by using the --set
option or by creating YAML configuration files.
Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes:
Public configuration file values-public.yaml: Use this file to save all non-sensitive configuration options.
Private configuration file values-private.yaml: Use this file to save all sensitive configuration options. Ensure that you store this file securely.
When using the |
Parameter | Description |
---|---|
|
Name of your cluster. |
|
Address, including port number, of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with |
|
Address of the Sensor endpoint including port number. |
|
Image pull policy for the Sensor container. |
|
The internal service-to-service TLS certificate that Sensor uses. |
|
The internal service-to-service TLS certificate key that Sensor uses. |
|
The memory request for the Sensor container. Use this parameter to override the default value. |
|
The CPU request for the Sensor container. Use this parameter to override the default value. |
|
The memory limit for the Sensor container. Use this parameter to override the default value. |
|
The CPU limit for the Sensor container. Use this parameter to override the default value. |
|
Specify a node selector label as |
|
If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. |
|
The name of the |
|
The name of the Collector image. |
|
Address of the registry you are using for the main image. |
|
Address of the registry you are using for the Collector image. |
|
Image pull policy for |
|
Image pull policy for the Collector images. |
|
Tag of |
|
Tag of |
|
Either |
|
Image pull policy for the Collector container. |
|
Image pull policy for the Compliance container. |
|
If you specify |
|
The memory request for the Collector container. Use this parameter to override the default value. |
|
The CPU request for the Collector container. Use this parameter to override the default value. |
|
The memory limit for the Collector container. Use this parameter to override the default value. |
|
The CPU limit for the Collector container. Use this parameter to override the default value. |
|
The memory request for the Compliance container. Use this parameter to override the default value. |
|
The CPU request for the Compliance container. Use this parameter to override the default value. |
|
The memory limit for the Compliance container. Use this parameter to override the default value. |
|
The CPU limit for the Compliance container. Use this parameter to override the default value. |
|
The internal service-to-service TLS certificate that Collector uses. |
|
The internal service-to-service TLS certificate key that Collector uses. |
|
This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
|
When you set this parameter as |
|
This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with |
|
This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. |
|
This setting controls the behavior of the admission control service.
You must specify |
|
If you set this option to |
|
Set it to |
|
The maximum time, in seconds, Red Hat Advanced Cluster Security for Kubernetes should wait while evaluating admission review requests. Use this to set request timeouts when you enable image scanning. If the image scan runs longer than the specified time, Red Hat Advanced Cluster Security for Kubernetes accepts the request. |
|
The memory request for the Admission Control container. Use this parameter to override the default value. |
|
The CPU request for the Admission Control container. Use this parameter to override the default value. |
|
The memory limit for the Admission Control container. Use this parameter to override the default value. |
|
The CPU limit for the Admission Control container. Use this parameter to override the default value. |
|
Specify a node selector label as |
|
If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. |
|
The internal service-to-service TLS certificate that Admission Control uses. |
|
The internal service-to-service TLS certificate key that Admission Control uses. |
|
Use this parameter to override the default |
|
If you specify |
|
Specify |
|
Specify |
|
Specify |
|
Resource specification for Sensor. |
|
Resource specification for Admission controller. |
|
Resource specification for Collector. |
|
Resource specification for Collector’s Compliance container. |
|
If you set this option to |
|
If you set this option to |
|
If you set this option to |
|
If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
|
Resource specification for Collector’s Compliance container. |
|
Setting this parameter allows you to modify the scanner log level. Use this option only for troubleshooting purposes. |
|
If you set this option to |
|
The minimum number of replicas for autoscaling. Defaults to 2. |
|
The maximum number of replicas for autoscaling. Defaults to 5. |
|
Specify a node selector label as |
|
If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. |
|
Specify a node selector label as |
|
If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. |
|
The memory request for the Scanner container. Use this parameter to override the default value. |
|
The CPU request for the Scanner container. Use this parameter to override the default value. |
|
The memory limit for the Scanner container. Use this parameter to override the default value. |
|
The CPU limit for the Scanner container. Use this parameter to override the default value. |
|
The memory request for the Scanner DB container. Use this parameter to override the default value. |
|
The CPU request for the Scanner DB container. Use this parameter to override the default value. |
|
The memory limit for the Scanner DB container. Use this parameter to override the default value. |
|
The CPU limit for the Scanner DB container. Use this parameter to override the default value. |
|
If you set this option to |
You can specify environment variables for Sensor and Admission controller in the following format:
customize:
envVars:
ENV_VAR1: "value1"
ENV_VAR2: "value2"
The customize
setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart, and additional pod labels, pod annotations, and container environment variables for workloads.
The configuration is hierarchical: metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment).
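As a sketch of that hierarchy, the following hypothetical values-public.yaml fragment sets a label on all objects and then overrides it for the Sensor deployment. The label key, values, and the assumption that the chart exposes a sensor sub-scope under customize are illustrative, matching the hierarchical behavior described above:

```yaml
customize:
  # Generic scope: applied to every object the chart creates.
  labels:
    owner: security-team
  # Narrower scope (assumed sub-key): overrides the generic value
  # for the Sensor deployment only.
  sensor:
    labels:
      owner: sensor-admins
```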
After you configure the values-public.yaml
and values-private.yaml
files, install the secured-cluster-services
Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim).
To install Collector on systems that have Unified Extensible Firmware Interface (UEFI) and that have Secure Boot enabled, you must use eBPF probes because kernel modules are unsigned, and the UEFI firmware cannot load unsigned packages. Collector identifies Secure Boot status at the start and switches to eBPF probes if required.
You must have generated an RHACS init bundle for your cluster.
You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io, see Red Hat Container Registry Authentication.
You must have the Central API Endpoint, including the address and the port number. You can view this information by choosing Advanced Cluster Security → ACS Instances from the cloud console navigation menu, then clicking the ACS instance you created.
Run the following command:
$ helm install -n stackrox \
--create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \
-f <name_of_cluster_init_bundle.yaml> \
-f <path_to_values_public.yaml> -f <path_to_values_private.yaml> \ (1)
--set imagePullSecrets.username=<username> \ (2)
--set imagePullSecrets.password=<password> (3)
(1) Use the -f option to specify the paths for your YAML configuration files.
(2) Include the user name for your pull secret for Red Hat Container Registry authentication.
(3) Include the password for your pull secret for Red Hat Container Registry authentication.
You can make changes to any configuration options after you have deployed the secured-cluster-services
Helm chart.
Update the values-public.yaml
and values-private.yaml
configuration files with new values.
Run the helm upgrade
command and specify the configuration files using the -f
option:
$ helm upgrade -n stackrox \
stackrox-secured-cluster-services rhacs/secured-cluster-services \
--reuse-values \ (1)
-f <path_to_values_public.yaml> \
-f <path_to_values_private.yaml>
(1) You must specify the --reuse-values parameter; otherwise, the helm upgrade command resets all previously configured settings.
You can also specify configuration values using the |
To install RHACS on secured clusters by using the CLI, perform the following steps:
Install the roxctl
CLI.
Install Sensor.
Install Sensor.
You must first download the binary. You can install roxctl
on Linux, Windows, or macOS.
You can install the roxctl
CLI binary on Linux by using the following procedure.
Determine the roxctl
architecture for the target operating system:
$ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
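The detection line above yields an empty suffix on x86_64 and a -<arch> suffix on other architectures, so the download URL resolves to roxctl or roxctl-<arch>. A runnable sketch of the same logic for two example machine types:

```shell
# Mimic the suffix computation for two example values of `uname -m`:
for machine in x86_64 aarch64; do
  arch="$(echo "${machine}" | sed "s/x86_64//")"
  arch="${arch:+-${arch}}"   # prepend a dash only when non-empty
  echo "${machine} -> roxctl${arch}"
done
```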
Download the roxctl
CLI:
$ curl -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.3.8/bin/Linux/roxctl${arch}"
Make the roxctl
binary executable:
$ chmod +x roxctl
Place the roxctl
binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
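For example, you can place the binary in a user-writable directory and confirm that the shell resolves it. This is a sketch: the touch line stands in for the download and chmod steps above, and ~/.local/bin is one common choice, not a requirement:

```shell
# Stand-in for the downloaded, executable binary:
touch roxctl && chmod +x roxctl

install_dir="${HOME}/.local/bin"      # any directory on your PATH works
mkdir -p "${install_dir}"
mv roxctl "${install_dir}/roxctl"

# Add the directory to PATH for this session if it is not there already;
# persist the export in your shell profile.
export PATH="${install_dir}:${PATH}"
command -v roxctl
```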
Verify the roxctl
version you have installed:
$ roxctl version
You can install the roxctl
CLI binary on macOS by using the following procedure.
Download the roxctl
CLI:
$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.3.8/bin/Darwin/roxctl
Remove all extended attributes from the binary:
$ xattr -c roxctl
Make the roxctl
binary executable:
$ chmod +x roxctl
Place the roxctl
binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
Verify the roxctl
version you have installed:
$ roxctl version
You can install the roxctl
CLI binary on Windows by using the following procedure.
Download the roxctl
CLI:
$ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.3.8/bin/Windows/roxctl.exe
Verify the roxctl
version you have installed:
$ roxctl version
To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. The following steps describe adding Sensor by using the RHACS portal.
You must have already installed Central services, or you must be able to access Central services by selecting your ACS instance in Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service).
On your secured cluster, in the RHACS portal, navigate to Platform Configuration → Clusters.
Select + New Cluster.
Specify a name for the cluster.
Provide appropriate values for the fields based on where you are deploying the Sensor.
Enter the Central API Endpoint, including the address and the port number. You can view this information again in the Red Hat Hybrid Cloud Console by choosing Advanced Cluster Security → ACS Instances, and then clicking the ACS instance you created.
Click Next to continue with the Sensor setup.
Click Download YAML File and Keys to download the cluster bundle (zip archive).
The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster.
From a system that has access to the monitored cluster, unzip and run the sensor
script from the cluster bundle:
$ unzip -d sensor sensor-<cluster_name>.zip
$ ./sensor/sensor.sh
If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for assistance.
After Sensor is deployed, it contacts Central and provides cluster information.
Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration → Clusters, the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems:
On OpenShift Container Platform, enter the following command:
$ oc get pod -n stackrox -w
On Kubernetes, enter the following command:
$ kubectl get pod -n stackrox -w
Click Finish to close the window.
After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor.
Verify installation by ensuring that your secured clusters can communicate with the ACS instance.