OKD can be configured to access a Google Compute Engine (GCE) infrastructure, including using GCE volumes as persistent storage for application data. After GCE is configured properly, some additional configuration must be completed on the OKD hosts.
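For example, once the cloud provider integration described below is in place, an existing GCE persistent disk can be offered to applications as a PersistentVolume. The following is a minimal sketch only; the object name, capacity, and the pre-created disk name my-gce-disk are placeholders, not values from this guide:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-pv-example            # hypothetical name
spec:
  capacity:
    storage: 10Gi                 # must match the size of the existing disk
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-gce-disk           # assumption: a GCE persistent disk created ahead of time
    fsType: ext4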
Configuring GCE for OKD requires the roles/owner role, which is needed to create service accounts, cloud storage, instances, images, templates, and Cloud DNS entries, and to deploy load balancers and health checks.
You can set the GCE configuration on your OKD master hosts in two ways: with Ansible during the advanced installation, or manually by editing the master and node configuration files.

During advanced installations, GCE can be configured using the openshift_cloudprovider_kind parameter, which is configurable in the inventory file:
# Cloud Provider Configuration
# openshift_cloudprovider_kind=gce
# openshift_gcp_project=<projectid> (1)
# openshift_gcp_prefix=<uid> (2)
# openshift_gcp_multizone=False (3)
(1) openshift_gcp_project is the project ID.
(2) openshift_gcp_prefix is a unique string to identify each OpenShift cluster.
(3) Use the openshift_gcp_multizone parameter to trigger multizone deployments on GCE. Its default value is False.
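With these values in the inventory, the cloud provider configuration is applied when the installation playbook runs. The following is a sketch that assumes a standard openshift-ansible checkout and the deploy_cluster.yml playbook used by recent releases; adjust the inventory path to match your environment:

$ ansible-playbook -i <inventory_file> playbooks/deploy_cluster.yml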
When Ansible configures GCE, the /etc/origin/cloudprovider/gce.conf file is created for you.
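The generated file uses the standard GCE cloud provider format. A minimal sketch of what it typically contains, with values derived from your inventory settings (the exact contents depend on your installation):

[Global]
project-id = <projectid>
network-name = <network-name>
node-tags = <node-tags>
node-instance-prefix = <uid>
multizone = false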
The advanced installation configures single-zone support by default. If you want multizone support, edit the /etc/origin/cloudprovider/gce.conf file as shown in Configuring Multizone Support in a GCE Deployment.
To configure the OKD masters for GCE:
Edit or create the master configuration file (/etc/origin/master/master-config.yaml by default) on all masters and update the contents of the apiServerArguments and controllerArguments sections:
kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "gce"
    cloud-config:
      - "/etc/origin/cloudprovider/gce.conf"
  controllerArguments:
    cloud-provider:
      - "gce"
    cloud-config:
      - "/etc/origin/cloudprovider/gce.conf"
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master instead of /etc/.
Start or restart the OKD services:
# systemctl restart origin-master-api origin-master-controllers
To configure the OKD nodes for GCE:
Edit or create the node configuration file (/etc/origin/node/node-config.yaml by default) on all nodes and update the contents of the kubeletArguments section:
kubeletArguments:
  cloud-provider:
    - "gce"
  cloud-config:
    - "/etc/origin/cloudprovider/gce.conf"
Currently, the nodeName must match the instance name in GCE in order for the cloud provider integration to work properly. The name must also be RFC1123 compliant.
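To confirm that the names line up, you can compare the node names that OKD reports with the instance names in your project. A quick check, assuming the gcloud CLI is installed and authenticated against the same project:

$ oc get nodes -o wide
$ gcloud compute instances list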
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, node-config.yaml must be in /etc/origin/node instead of /etc/.
Start or restart the OKD services on all nodes:
# systemctl restart origin-node
If you are manually configuring GCE, multizone support is not configured by default. The advanced installation configures single-zone support by default.

If you want multizone support:
Edit or create a /etc/origin/cloudprovider/gce.conf file on all of your OKD hosts, both masters and nodes.
Add the following contents:
[Global]
project-id = <project-id>
network-name = <network-name>
node-tags = <node-tags>
node-instance-prefix = <instance-prefix>
multizone = true
To return to single-zone support, set the multizone value to false.
Start or restart OKD services on all master and node hosts to apply your configuration changes. See Restarting OKD services:
# systemctl restart origin-master-api origin-master-controllers
# systemctl restart origin-node
Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue:
Log in to the CLI as a cluster administrator.
Check and back up existing node labels:
$ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
Delete the nodes:
$ oc delete node <node_name>
On each node host, restart the OKD service.
# systemctl restart origin-node
Add back to each node any labels that it previously had.
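For example, a label captured in the backup can be reapplied with oc label; the node name, key, and value below are placeholders:

$ oc label node <node_name> <key>=<value>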