Workload Identity Federation (WIF) is a Google Cloud Platform (GCP) Identity and Access Management (IAM) feature that provides third parties a secure method to access resources on a customer’s cloud account. WIF eliminates the need for service account keys, and is Google Cloud’s preferred method of credential authentication.
While service account keys can provide powerful access to your Google Cloud resources, they must be maintained by the end user and can be a security risk if they are not managed properly. WIF does not use service keys as an access method for your Google cloud resources. Instead, WIF grants access by using credentials from external identity providers to generate short-lived credentials for workloads. The workloads can then use these credentials to temporarily impersonate service accounts and access Google Cloud resources. This removes the burden of having to properly maintain service account keys, and removes the risk of unauthorized users gaining access to service account keys.
The following items provide a basic overview of the Workload Identity Federation process:
The owner of the Google Cloud Platform (GCP) project configures a workload identity pool with an identity provider, allowing OpenShift Dedicated to access the project’s associated service accounts using short-lived credentials.
This workload identity pool is configured to authenticate requests using an identity provider (IdP) that the user defines.
For applications to get access to cloud resources, they first pass credentials to Google’s Security Token Service (STS). STS uses the specified identity provider to verify the credentials.
Once the credentials are verified, STS returns a temporary access token to the caller, giving the application the ability to impersonate the service account bound to that identity.
Operators also need access to cloud resources. By using WIF instead of service account keys to grant this access, cluster security is further strengthened, as service account keys are no longer stored in the cluster. Instead, operators are given temporary access tokens that impersonate the service accounts. These tokens are short-lived and regularly rotated.
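The following curl commands are a minimal sketch of this token exchange, shown only to illustrate the flow described above. The endpoints are Google's public Security Token Service and IAM Credentials APIs; the pool, provider, and service account values are hypothetical placeholders. In practice, OpenShift Dedicated components perform this exchange automatically through their Google Cloud client libraries.
$ # Exchange an external OIDC token for a short-lived federated access token via Google STS.
$ curl -s https://sts.googleapis.com/v1/token \
--data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
--data-urlencode "audience=//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>" \
--data-urlencode "subject_token_type=urn:ietf:params:oauth:token-type:jwt" \
--data-urlencode "requested_token_type=urn:ietf:params:oauth:token-type:access_token" \
--data-urlencode "scope=https://www.googleapis.com/auth/cloud-platform" \
--data-urlencode "subject_token=<external_oidc_token>"
$ # Use the federated token to impersonate a service account and obtain a short-lived access token.
$ curl -s -X POST \
-H "Authorization: Bearer <federated_token>" \
-H "Content-Type: application/json" \
-d '{"scope": ["https://www.googleapis.com/auth/cloud-platform"]}' \
"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<service_account_email>:generateAccessToken"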
For more information about Workload Identity Federation, see the Google Cloud Platform documentation.
Workload Identity Federation (WIF) is only supported on OpenShift Dedicated version 4.17 and later. |
You must complete the following prerequisites before Creating a Workload Identity Federation cluster using OpenShift Cluster Manager and Creating a Workload Identity Federation cluster using the OCM CLI.
You have confirmed your Google Cloud account has the necessary resource quotas and limits to support your desired cluster size according to the cluster resource requirements.
For more information regarding resource quotas and limits, see Additional resources. |
You have reviewed the introduction to OpenShift Dedicated and the documentation on architecture concepts.
You have reviewed the OpenShift Dedicated cloud deployment options.
You have read and completed the Required customer procedure.
WIF supports the deployment of a private OpenShift Dedicated on Google Cloud Platform (GCP) cluster with Private Service Connect (PSC). Red Hat recommends using PSC when deploying private clusters. For more information about the prerequisites for PSC, see Prerequisites for Private Service Connect. |
Log in to OpenShift Cluster Manager and click Create cluster on the OpenShift Dedicated card.
Under Billing model, configure the subscription type and infrastructure type.
Workload Identity Federation is supported by the Customer Cloud Subscription (CCS) infrastructure type only. |
Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation.
Select the Customer cloud subscription infrastructure type.
Click Next.
Select Run on Google Cloud Platform.
Select Workload Identity Federation as the Authentication type.
Read and complete all the required prerequisites.
Click the checkbox indicating that you have read and completed all the required prerequisites.
To create a new WIF configuration, open a terminal window and run the following OCM CLI command:
$ ocm gcp create wif-config --name <wif_name> \ (1)
--project <gcp_project_id> (2)
1 | Replace <wif_name> with the name of your WIF configuration. |
2 | Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented. |
Select a configured WIF configuration from the WIF configuration drop-down list. If you want to select the WIF configuration you created in the previous step, click Refresh first.
Click Next.
On the Details page, provide a name for your cluster and specify the cluster details:
In the Cluster name field, enter a name for your cluster.
Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com. If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated as a 15-character string.
To customize the subdomain prefix, select the Create custom domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation.
Select a cluster version from the Version drop-down menu.
Workload Identity Federation (WIF) is only supported on OpenShift Dedicated version 4.17 and later. |
Select a cloud provider region from the Region drop-down menu.
Select a Single zone or Multi-zone configuration.
Optional: Select Enable Secure Boot for Shielded VMs to use Shielded VMs when installing your cluster. For more information, see Shielded VMs.
To successfully create a cluster, you must select Enable Secure Boot support for Shielded VMs if your organization has the constraints/compute.requireShieldedVm policy constraint enabled. |
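If you are unsure whether this constraint is enforced for your project, you can check it with a command along the following lines; this is a suggested check, not part of the product procedure, and <gcp_project_id> is a placeholder:
$ gcloud resource-manager org-policies describe compute.requireShieldedVm --project=<gcp_project_id>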
Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default.
Optional: Expand Advanced Encryption to make changes to encryption settings.
Select Use custom KMS keys to use custom KMS keys. If you prefer not to use custom KMS keys, leave the default setting Use default KMS Keys.
With Use Custom KMS keys selected:
Select a key ring location from the Key ring location drop-down menu.
Select a key ring from the Key ring drop-down menu.
Select a key name from the Key name drop-down menu.
Provide the KMS Service Account.
Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated.
If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography. |
Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but not the keys. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default.
By enabling etcd encryption for the key values in etcd, you incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case. |
Click Next.
On the Machine pool page, select a Compute node instance type and a Compute node count. The number and types of nodes that are available depend on your OpenShift Dedicated subscription. If you are using multiple availability zones, the compute node count is per zone.
Optional: Expand Add node labels to add labels to your nodes. Click Add additional label to add more node labels.
This step refers to labels within Kubernetes, not Google Cloud. For more information regarding Kubernetes labels, see Labels and Selectors. |
Click Next.
In the Cluster privacy dialog, select Public or Private to use either public or private API endpoints and application routes for your cluster. If you select Private, Use Private Service Connect is selected by default, and cannot be disabled. Private Service Connect (PSC) is Google Cloud’s security-enhanced networking feature.
Optional: To install the cluster in an existing GCP Virtual Private Cloud (VPC):
Select Install into an existing VPC.
Private Service Connect is supported only with Install into an existing VPC. |
If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy.
In order to configure a cluster-wide proxy for your cluster, you must first create the Cloud network address translation (NAT) and a Cloud router. See the Additional resources section for more information. |
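As a rough sketch, a Cloud Router and Cloud NAT can be created with standard gcloud commands similar to the following; the names, region, and project are placeholders, and you should follow the linked Additional resources for the authoritative procedure:
$ gcloud compute routers create <router_name> \
--network=<vpc_name> \
--region=<gcp_region> \
--project=<gcp_project_id>
$ gcloud compute routers nats create <nat_name> \
--router=<router_name> \
--region=<gcp_region> \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--project=<gcp_project_id>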
Accept the default application ingress settings, or to create your own custom settings, select Custom Settings.
Optional: Provide a route selector.
Optional: Provide excluded namespaces.
Select a namespace ownership policy.
Select a wildcard policy.
For more information about custom application ingress settings, click on the information icon provided for each setting.
Click Next.
Optional: To install the cluster into a GCP Shared VPC, follow these steps.
The VPC owner of the host project must enable a project as a host project in their Google Cloud console and add the Compute Network Administrator, Compute Security Administrator, and DNS Administrator roles to the following service accounts prior to cluster installation:
Failure to do so will cause the cluster to go into the "Installation Waiting" state. If this occurs, you must contact the VPC owner of the host project to assign the roles to the service accounts listed above. The VPC owner of the host project has 30 days to grant the listed permissions before the cluster creation fails. For more information, see Enable a host project and Provision Shared VPC. |
Select Install into GCP Shared VPC.
Specify the Host project ID. If the specified host project ID is incorrect, cluster creation fails.
If you opted to install the cluster in an existing GCP VPC, provide your Virtual Private Cloud (VPC) subnet settings and select Next. You must have created the Cloud network address translation (NAT) and a Cloud router. See Additional resources for information about Cloud NATs and Google VPCs.
If you are installing a cluster into a Shared VPC, the VPC name and subnets are shared from the host project. |
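To help fill in the subnet settings for an existing VPC, you can list the available subnets with a standard gcloud command such as the following; the network, region, and project values are placeholders:
$ gcloud compute networks subnets list \
--network=<vpc_name> \
--regions=<gcp_region> \
--project=<gcp_project_id>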
Click Next.
If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page:
Enter a value in at least one of the following fields:
Specify a valid HTTP proxy URL.
Specify a valid HTTPS proxy URL.
In the Additional trust bundle field, provide a PEM encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments.
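For illustration, an additional trust bundle is simply one or more concatenated PEM certificates; the contents below are placeholders:
-----BEGIN CERTIFICATE-----
<base64-encoded proxy CA certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<base64-encoded intermediate certificate, if any>
-----END CERTIFICATE-----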
Click Next.
For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy.
In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided.
CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding. If the cluster privacy is set to Private, you cannot access your cluster until you configure private connections in your cloud provider. |
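For reference, the defaults presented by OpenShift Cluster Manager are typically along the following lines; verify the values shown in your own console before proceeding, and make sure they do not overlap with existing networks:
Machine CIDR:  10.0.0.0/16
Service CIDR:  172.30.0.0/16
Pod CIDR:      10.128.0.0/14
Host prefix:   /23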
On the Cluster update strategy page, configure your update preferences:
Choose a cluster update method:
Select Individual updates if you want to schedule each update individually. This is the default option.
Select Recurring updates to update your cluster on your preferred day and start time, when updates are available.
You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle. |
Provide administrator approval based on your cluster update method:
Individual updates: If you select an update version that requires approval, provide an administrator’s acknowledgment and click Approve and continue.
Recurring updates: If you selected recurring updates for your cluster, provide an administrator’s acknowledgment and click Approve and continue. OpenShift Cluster Manager does not start scheduled y-stream updates for minor versions without receiving an administrator’s acknowledgment.
If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus.
Optional: You can set a grace period for node draining during cluster upgrades. A 1-hour grace period is set by default.
Click Next.
In the event of critical security concerns that significantly impact the security or stability of a cluster, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings. |
Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete.
Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable, which is located directly under Delete Protection: Disabled. This will prevent your cluster from being deleted. To disable delete protection, select Disable. By default, clusters are created with the delete protection feature disabled.
You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready.
You can create an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with Workload Identity Federation (WIF) using the OpenShift Cluster Manager CLI (ocm) in interactive or non-interactive mode.
To create a WIF-enabled cluster, the OpenShift Cluster Manager CLI (ocm) must be installed and configured. |
Before creating the cluster, you must first create a WIF configuration.
Migrating an existing non-WIF cluster to a WIF configuration is not supported. This feature can only be enabled during new cluster creation. |
You can create a WIF configuration using the auto mode or the manual mode.
The auto mode enables you to automatically create the service accounts for OpenShift Dedicated components as well as other IAM resources.
Alternatively, you can use the manual mode. In manual mode, you are provided with commands within a script.sh file, which you use to manually create the service accounts for OpenShift Dedicated components as well as other IAM resources.
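The exact contents of script.sh are generated for your project. Conceptually, it contains standard gcloud commands along the following lines; all names, the issuer URL, and the role shown here are illustrative placeholders, not the literal generated output:
$ gcloud iam workload-identity-pools create <pool_id> \
--project=<gcp_project_id> --location=global
$ gcloud iam workload-identity-pools providers create-oidc <provider_id> \
--project=<gcp_project_id> --location=global \
--workload-identity-pool=<pool_id> \
--issuer-uri=<cluster_oidc_issuer_url> \
--attribute-mapping="google.subject=assertion.sub"
$ gcloud iam service-accounts create <service_account_id> \
--project=<gcp_project_id>
$ gcloud projects add-iam-policy-binding <gcp_project_id> \
--member="serviceAccount:<service_account_email>" \
--role="roles/compute.admin"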
Based on your mode preference, run one of the following commands to create a WIF configuration:
Create a WIF configuration in auto mode by running the following command:
$ ocm gcp create wif-config --name <wif_name> \ (1)
--project <gcp_project_id> (2)
1 | Replace <wif_name> with the name of your WIF configuration. |
2 | Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented. |
2024/09/26 13:05:41 Creating workload identity configuration...
2024/09/26 13:05:47 Workload identity pool created with name 2e1kcps6jtgla8818vqs8tbjjls4oeub
2024/09/26 13:05:47 workload identity provider created with name oidc
2024/09/26 13:05:48 IAM service account osd-worker-oeub created
2024/09/26 13:05:49 IAM service account osd-control-plane-oeub created
2024/09/26 13:05:49 IAM service account openshift-gcp-ccm-oeub created
2024/09/26 13:05:50 IAM service account openshift-gcp-pd-csi-driv-oeub created
2024/09/26 13:05:50 IAM service account openshift-image-registry-oeub created
2024/09/26 13:05:51 IAM service account openshift-machine-api-gcp-oeub created
2024/09/26 13:05:51 IAM service account osd-deployer-oeub created
2024/09/26 13:05:52 IAM service account cloud-credential-operator-oeub created
2024/09/26 13:05:52 IAM service account openshift-cloud-network-c-oeub created
2024/09/26 13:05:53 IAM service account openshift-ingress-gcp-oeub created
2024/09/26 13:05:55 Role "osd_deployer_v4.17" updated
Create a WIF configuration in manual mode by running the following command:
$ ocm gcp create wif-config --name <wif_name> \ (1)
--project <gcp_project_id> \ (2)
--mode=manual
1 | Replace <wif_name> with the name of your WIF configuration. |
2 | Replace <gcp_project_id> with the ID of the Google Cloud Platform (GCP) project where the WIF configuration will be implemented. |
Once the WIF configuration is complete, the following service accounts, roles, and groups are created.
Service Account/Group | GCP pre-defined roles and Red Hat custom roles |
---|---|
osd-deployer | osd_deployer_v4.17 |
osd-control-plane | |
osd-worker | |
cloud-credential-operator-gcp-ro-creds | cloud_credential_operator_gcp_ro_creds_v4.17 |
openshift-cloud-network-config-controller-gcp | openshift_cloud_network_config_controller_gcp_v4.17 |
openshift-gcp-ccm | openshift_gcp_ccm_v4.17 |
openshift-gcp-pd-csi-driver-operator | |
openshift-image-registry-gcp | openshift_image_registry_gcs_v4.17 |
openshift-ingress-gcp | openshift_ingress_gcp_v4.17 |
openshift-machine-api-gcp | openshift_machine_api_gcp_v4.17 |
Access via SRE group: sd-sre-platform-gcp-access | sre_managed_support |
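To sanity-check the resources created in your project, you can list them with standard gcloud commands, for example (the project ID is a placeholder):
$ gcloud iam workload-identity-pools list --location=global --project=<gcp_project_id>
$ gcloud iam service-accounts list --project=<gcp_project_id>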
For further details about WIF configuration roles and their assigned permissions, see managed-cluster-config.
You can create a WIF cluster using the interactive mode or the non-interactive mode.
In interactive mode, cluster attributes are displayed automatically as prompts during the creation of the cluster. You enter the values for those prompts based on specified requirements in the fields provided.
In non-interactive mode, you specify the values for specific parameters within the command.
Based on your mode preference, run one of the following commands to create an OpenShift Dedicated on Google Cloud Platform (GCP) cluster with a WIF configuration:
Create a cluster in interactive mode by running the following command:
$ ocm create cluster --interactive (1)
1 | interactive mode enables you to specify configuration options at the interactive prompts. |
Create a cluster in non-interactive mode by running the following command:
The following example is made up of optional and required parameters and may differ from your own command. |
$ ocm create cluster <cluster_name> \ (1)
--provider=gcp \ (2)
--ccs=true \ (3)
--wif-config <wif_name> \ (4)
--region <gcp_region> \ (5)
--subscription-type=marketplace-gcp \ (6)
--marketplace-gcp-terms=true \ (7)
--version <version> \ (8)
--multi-az=true \ (9)
--enable-autoscaling=true \ (10)
--min-replicas=3 \ (11)
--max-replicas=6 \ (12)
--secure-boot-for-shielded-vms=true (13)
1 | Replace <cluster_name> with a name for your cluster. |
2 | Set value to gcp . |
3 | Set value to true . |
4 | Replace <wif_name> with the name of your WIF configuration. |
5 | Replace <gcp_region> with the Google Cloud Platform (GCP) region where the new cluster will be deployed. |
6 | Optional: The subscription billing model for the cluster. |
7 | Optional: If you provided a value of marketplace-gcp for the subscription-type parameter, marketplace-gcp-terms must be equal to true . |
8 | Optional: The desired OpenShift version. |
9 | Optional: Deploy to multiple availability zones. |
10 | Optional: Enable autoscaling of compute nodes. |
11 | Optional: Minimum number of compute nodes. |
12 | Optional: Maximum number of compute nodes. |
13 | Optional: Secure Boot enables the use of Shielded VMs on Google Cloud Platform. |
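For example, a filled-in non-interactive command might look like the following; the cluster name, WIF configuration name, region, and version are illustrative placeholders only, using the same parameters described above:
$ ocm create cluster my-wif-cluster \
--provider=gcp \
--ccs=true \
--wif-config my-wif-config \
--region us-east1 \
--version 4.17.0 \
--multi-az=true \
--enable-autoscaling=true \
--min-replicas=3 \
--max-replicas=6 \
--secure-boot-for-shielded-vms=true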
Updating a WIF configuration is only applicable for y-stream updates. For an overview of the update process, including details regarding version semantics, see The Ultimate Guide to OpenShift Release and Upgrade Process for Cluster Administrators. |
Before updating a WIF-enabled OpenShift Dedicated cluster to a newer version, you must update the wif-config to that version as well. If you do not update the wif-config version before attempting to update the cluster version, the cluster version update will fail.
You can update a wif-config to a specific OpenShift Dedicated version by running the following command:
$ ocm gcp update wif-config --version <version> \ (1)
--name <wif_name> (2)
1 | Replace <version> with the OpenShift Dedicated y-stream version you plan to update the cluster to. |
2 | Replace <wif_name> with the name of the WIF configuration you want to update. |
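For example, to prepare a WIF configuration before updating a cluster to a 4.18 y-stream release, a command might look like the following; the version and configuration name are illustrative placeholders:
$ ocm gcp update wif-config --version 4.18 \
--name my-wif-config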
To list all of your OpenShift Dedicated clusters that have been deployed using the WIF authentication type, run the following command:
$ ocm list clusters --parameter search="gcp.authentication.wif_config_id != ''"
To list all of your OpenShift Dedicated clusters that have been deployed using a specific wif-config, run the following command:
$ ocm list clusters --parameter search="gcp.authentication.wif_config_id = '<wif_config_id>'" (1)
1 | Replace <wif_config_id> with the ID of the WIF configuration to list the clusters that have been deployed using that WIF configuration. |
For information about OpenShift Dedicated clusters using a Customer Cloud Subscription (CCS) model on Google Cloud Platform (GCP), see Customer requirements.
For information about resource quotas, see Resource quotas per project.
For information about limits, see GCP account limits.
For information about required APIs, see Required customer procedure.
For information about managing workload identity pools, see Manage workload identity pools and providers.
For information about managing roles and permissions in your Google Cloud account, see Roles and permissions.
For a list of the supported maximums, see Cluster maximums.