Configuring for GCE | Installation and Configuration | OKD 3.9

Overview

OKD can be configured to access a Google Compute Engine (GCE) infrastructure, including using GCE volumes as persistent storage for application data. After GCE is configured properly, some additional configurations will need to be completed on the OKD hosts.

Permissions

Configuring GCE for OKD requires the following role:

roles/owner

The roles/owner role allows you to create service accounts, cloud storage, instances, images, templates, and Cloud DNS entries, and to deploy load balancers and health checks. It is also helpful to have delete permissions so that you can redeploy the environment while testing.

Configuring masters

You can set the GCE configuration on your OKD master hosts in two ways: with Ansible during the advanced installation, or manually by editing the master configuration file.

Configuring OKD masters for GCE with Ansible

During advanced installations, GCE can be configured using the openshift_cloudprovider_kind parameter, which is configurable in the inventory file.

  • When using GCE, the openshift_gcp_project and openshift_gcp_prefix parameters must be defined.

  • For running load balancer services using the Google compute platform, the nodes (Compute Engine VM instances) must be tagged with <openshift_gcp_prefix>ocp, that is, the value of the openshift_gcp_prefix parameter with the suffix ocp appended. For example, if openshift_gcp_prefix is set to mycluster, the nodes must be tagged with myclusterocp. See Adding and Removing Network Tags for more information on how to add network tags to Compute Engine VM instances.
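The tag can be derived from the prefix as shown in the minimal sketch below. The prefix value, instance name, and zone are hypothetical placeholders, and the gcloud command is printed rather than run so you can review it before applying it to real instances:

```shell
# Hypothetical prefix value; substitute the value of openshift_gcp_prefix
# from your inventory file.
openshift_gcp_prefix=mycluster

# The required network tag is the prefix with the literal suffix "ocp".
node_tag="${openshift_gcp_prefix}ocp"
echo "$node_tag"

# The tag would then be applied with the gcloud CLI (command printed,
# not executed; <instance-name> and <zone> are placeholders):
echo "gcloud compute instances add-tags <instance-name> --tags ${node_tag} --zone <zone>"
```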

Example GCE Configuration with Ansible
# Cloud Provider Configuration
# openshift_cloudprovider_kind=gce
# openshift_gcp_project=<projectid> (1)
# openshift_gcp_prefix=<uid> (2)
# openshift_gcp_multizone=False (3)
1 openshift_gcp_project is the GCE project ID.
2 openshift_gcp_prefix is a unique string that identifies each OpenShift cluster.
3 The openshift_gcp_multizone parameter triggers multizone deployments on GCE. Its default value is False.

When Ansible configures GCE, the following files are created for you:

  • /etc/origin/cloudprovider/gce.conf

  • /etc/origin/master/master-config.yaml

  • /etc/origin/node/node-config.yaml

The advanced installation configures single-zone support by default. If you want multizone support, edit the /etc/origin/cloudprovider/gce.conf as shown in Configuring Multizone Support in a GCE Deployment.

Manually Configuring OKD masters for GCE

To configure the OKD masters for GCE:

  1. Edit or create the master configuration file (/etc/origin/master/master-config.yaml by default) on all masters and update the contents of the apiServerArguments and controllerArguments sections:

    kubernetesMasterConfig:
      ...
      apiServerArguments:
        cloud-provider:
          - "gce"
        cloud-config:
          - "/etc/origin/cloudprovider/gce.conf"
      controllerArguments:
        cloud-provider:
          - "gce"
        cloud-config:
          - "/etc/origin/cloudprovider/gce.conf"

    In a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted into the master and node containers. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.

  2. Start or restart the OKD services:

    # systemctl restart origin-master-api origin-master-controllers
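As a quick sanity check before restarting, you can confirm that both argument sections reference the GCE provider. The snippet below writes out a hypothetical copy of the relevant section purely for illustration; on a real master, run the grep against /etc/origin/master/master-config.yaml instead:

```shell
# Hypothetical fragment of master-config.yaml, written here for illustration.
cat > master-config.yaml <<'EOF'
kubernetesMasterConfig:
  apiServerArguments:
    cloud-provider:
      - "gce"
    cloud-config:
      - "/etc/origin/cloudprovider/gce.conf"
  controllerArguments:
    cloud-provider:
      - "gce"
    cloud-config:
      - "/etc/origin/cloudprovider/gce.conf"
EOF

# Both apiServerArguments and controllerArguments should name gce,
# so the count printed here should be 2:
grep -c '"gce"' master-config.yaml
```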

Configuring Nodes

To configure the OKD nodes for GCE:

  1. Edit or create the node configuration file (/etc/origin/node/node-config.yaml by default) on all nodes and update the contents of the kubeletArguments section:

    kubeletArguments:
      cloud-provider:
        - "gce"
      cloud-config:
        - "/etc/origin/cloudprovider/gce.conf"

Currently, the nodeName must match the instance name in GCE in order for the cloud provider integration to work properly. The name must also be RFC1123 compliant.

In a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted into the master and node containers. Therefore, node-config.yaml must be in /etc/origin/node rather than /etc/.

  2. Start or restart the OKD services on all nodes:

    # systemctl restart origin-node

Configuring Multizone Support in a GCE Deployment

If manually configuring GCE, multizone support is not configured by default.

The advanced installation configures single-zone support by default.

If you want multizone support:

  1. Edit or create a /etc/origin/cloudprovider/gce.conf file on all of your OKD hosts, both masters and nodes.

  2. Add the following contents:

    [Global]
    project-id = <project-id>
    network-name = <network-name>
    node-tags = <node-tags>
    node-instance-prefix = <instance-prefix>
    multizone = true

To return to single-zone support, set the multizone value to false.
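For illustration, a fully populated gce.conf might look like the following; all values are hypothetical, so substitute your own project ID, network name, node tags, and instance prefix:

```ini
[Global]
project-id = my-gcp-project
network-name = default
node-tags = myclusterocp
node-instance-prefix = mycluster
multizone = true
```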

Applying Configuration Changes

Start or restart OKD services on all master and node hosts to apply your configuration changes. See Restarting OKD services:

# systemctl restart origin-master-api origin-master-controllers
# systemctl restart origin-node

Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue:

  1. Log in to the CLI as a cluster administrator.

  2. Check and back up existing node labels:

    $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'

  3. Delete the nodes:

    $ oc delete node <node_name>

  4. On each node host, restart the OKD service:

    # systemctl restart origin-node

  5. Add back any labels on each node that you previously had.
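The label backup and restore in steps 2 and 5 can be sketched as follows. The node name and label values are hypothetical, the backup content is simulated rather than read from a live cluster, and the oc commands are printed rather than executed so you can review them first:

```shell
node=node1.example.com                 # hypothetical node name
labels_file="${node}.labels"

# Step 2 would capture the labels from a live cluster, for example:
#   oc describe node "$node" | grep -Poz '(?s)Labels.*\n.*(?=Taints)' > "$labels_file"
# Simulated backup content for illustration (key=value per line):
printf 'region=us-central1\nzone=us-central1-a\n' > "$labels_file"

# Step 5: re-apply each saved label once the node has re-registered
# under its cloud provider instance-id.
while read -r label; do
  echo oc label node "$node" "$label"  # drop 'echo' to actually apply
done < "$labels_file"
```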