You can configure OpenShift Container Platform to access VMware vSphere VMDK Volumes. This includes using VMware vSphere VMDK Volumes as persistent storage for application data.
The vSphere Cloud Provider allows using vSphere managed storage within OpenShift Container Platform and supports:
Volumes
Persistent Volumes
Storage Classes and provisioning of volumes
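For example, after the cloud provider is enabled (see the steps below), dynamic provisioning of VMDK-backed volumes can be configured with a StorageClass that uses the in-tree kubernetes.io/vsphere-volume provisioner. A minimal sketch; the class name and diskformat value are illustrative choices, not requirements:
oc create -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-standard        # arbitrary name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin              # other valid values include zeroedthick and eagerzeroedthick
EOF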
Enabling VMware vSphere requires installing VMware Tools on each Node VM. See Installing VMware Tools for more information.
To enable VMware vSphere cloud provider for OpenShift Container Platform:
Create a VM folder and move OpenShift Container Platform Node VMs to this folder.
Verify that the Node VM names comply with the regex [a-z](([-0-9a-z])?[0-9a-z])?(\.[a-z0-9](([-0-9a-z])?[0-9a-z])?)*.
VM names can not:
begin with a number.
contain any capital letters.
contain any special characters other than - and the . separator.
Set the disk.enableUUID parameter to TRUE for each Node VM. This ensures that the VMDK always presents a consistent UUID to the VM, allowing the disk to be mounted properly. For every virtual machine node that will participate in the cluster, follow the steps below using the GOVC tool:
Set up the GOVC environment:
export GOVC_URL='vCenter IP OR FQDN'
export GOVC_USERNAME='vCenter User'
export GOVC_PASSWORD='vCenter Password'
export GOVC_INSECURE=1
Find the Node VM paths:
govc ls /datacenter/vm/<vm-folder-name>
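You can also use this listing to check the VM names against the naming regex above; a sketch, assuming the same folder path:
# Print any VM names that do not match the required naming pattern.
govc ls /datacenter/vm/<vm-folder-name> | awk -F/ '{print $NF}' | \
  grep -Ev '^[a-z](([-0-9a-z])?[0-9a-z])?(\.[a-z0-9](([-0-9a-z])?[0-9a-z])?)*$' \
  || echo "All VM names comply"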
Set disk.enableUUID to true for all VMs:
govc vm.change -e="disk.enableUUID=1" -vm='VM Path'
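If you have many node VMs, the same change can be applied to every VM in the folder with a small loop; a sketch, assuming the folder path from the previous step:
# Set disk.enableUUID=1 on every VM found in the node VM folder.
for vm in $(govc ls /datacenter/vm/<vm-folder-name>); do
  govc vm.change -e="disk.enableUUID=1" -vm="$vm"
done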
If OpenShift Container Platform node VMs are created from a template VM, then disk.enableUUID=1 can be set on the template VM. VMs cloned from that template inherit the setting.
Create and assign roles to the vSphere Cloud Provider user and vSphere entities. The vSphere Cloud Provider requires the following privileges to interact with vCenter. See the vSphere Documentation Center for steps to create a custom role and user and to assign the role. A scripted alternative using the govc CLI follows the table.
Roles | Privileges | Entities | Propagate to Children
---|---|---|---
manage-k8s-node-vms | Resource.AssignVMToPool, System.Anonymous, System.Read, System.View, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete | Cluster, Hosts, VM Folder | Yes
manage-k8s-volumes | Datastore.AllocateSpace, Datastore.FileManagement, System.Anonymous, System.Read, System.View | Datastore | No
k8s-system-read-and-spbm-profile-view | StorageProfile.View, System.Anonymous, System.Read, System.View | vCenter | No
ReadOnly | System.Anonymous, System.Read, System.View | Datacenter, Datastore Cluster, Datastore Storage Folder | No
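If you prefer to script this instead of using the vSphere Web Client, the govc CLI can also create roles and assign permissions. A sketch for one of the roles above, assuming a cloud provider user named openshift@vsphere.local and your own inventory paths:
# Create the manage-k8s-volumes role (the System.Anonymous/Read/View privileges are present in every vSphere role by default).
govc role.create manage-k8s-volumes Datastore.AllocateSpace Datastore.FileManagement

# Grant the role to the cloud provider user on the datastore, without propagation.
govc permissions.set -principal 'openshift@vsphere.local' -role manage-k8s-volumes \
  -propagate=false /datacenter/datastore/<datastore-name>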
After enabling the vSphere Cloud Provider, node names are set to the VM names from the vCenter Inventory.
You can configure OpenShift Container Platform for VMware vSphere (VCP) by modifying the Ansible inventory file during installation or after installation.
Add the following section to the Ansible inventory file:
[OSEv3:vars]
openshift_cloudprovider_kind=vsphere
openshift_cloudprovider_vsphere_username=administrator@vsphere.local (1)
openshift_cloudprovider_vsphere_password=<password>
openshift_cloudprovider_vsphere_host=10.x.y.32 (2)
openshift_cloudprovider_vsphere_datacenter=<Datacenter> (3)
openshift_cloudprovider_vsphere_datastore=<Datastore> (4)
1 | The user name with the appropriate permissions to create and attach disks in vSphere. |
2 | The vCenter server address. |
3 | The vCenter Datacenter name where the OpenShift Container Platform VMs are located. |
4 | The datastore used for creating VMDKs. |
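If you set these variables as part of an Ansible-driven installation, the installer applies them when the configuration playbook runs. The playbook path below is a common default for recent 3.x releases and is an assumption; adjust it to your installation:
# Re-run the installer configuration with the updated inventory (path varies by version and layout).
ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml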
Configuring OpenShift Container Platform for VMware vSphere requires the /etc/origin/cloudprovider/vsphere.conf file on each node host.
If the file does not exist, create it, and add the following:
[Global] (1)
user = "myusername" (2)
password = "mypassword" (3)
port = "443" (4)
insecure-flag = "1" (5)
datacenter = "mydatacenter" (6)

[VirtualCenter "1.2.3.4"] (7)
user = "myvCenterusername"
password = "password"

[VirtualCenter "1.2.3.5"]
port = "448"
insecure-flag = "0"

[Workspace] (8)
server = "10.10.0.2" (9)
datacenter = "mydatacenter"
folder = "path/to/file" (10)
datastore = "mydatastore" (11)
resourcepool-path = "myresourcepoolpath" (12)

[Disk]
scsicontrollertype = pvscsi

[Network]
public-network = "VM Network" (13)
1 | Any properties set in the [Global] section are used for all specified vCenters unless overridden by the settings in the individual [VirtualCenter] sections. |
2 | vCenter username for the vSphere cloud provider. |
3 | vCenter password for the specified user. |
4 | Optional. Port number for the vCenter server. Defaults to port 443. |
5 | Set to 1 if the vCenter uses a self-signed cert. |
6 | Name of the data center on which Node VMs are deployed. |
7 | Override specific [Global] properties for this Virtual Center. Possible settings can be [Port], [user], [insecure-flag], [datacenters]. Any settings not specified are pulled from the [Global] section. |
8 | Set any properties used for various vSphere Cloud Provider functionality. For example, dynamic provisioning, Storage Profile Based Volume provisioning, and others. |
9 | IP Address or FQDN for the vCenter server. |
10 | Path to the VM directory for node VMs. |
11 | Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. If the datastore is located in a storage directory or is a member of a datastore cluster, you must specify the full path. |
12 | Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning should be created. |
13 | Set to the network port group for vSphere to access the node, which is called VM Network by default. This is the node host’s externalIP that is registered with Kubernetes. |
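Because both the master and node configuration files below reference this path, the file must be present on every master and node host. One way to distribute a locally prepared copy is an Ansible ad-hoc command; a sketch, assuming the standard [masters] and [nodes] groups from your inventory:
# Copy vsphere.conf to every master and node host (group names assume a standard OpenShift inventory).
ansible 'masters:nodes' -i /path/to/inventory -m copy \
  -a "src=vsphere.conf dest=/etc/origin/cloudprovider/vsphere.conf owner=root group=root mode=0600"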
Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections with the following:
kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      {}
  apiServerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
  controllerArguments:
    cloud-provider:
      - "vsphere"
    cloud-config:
      - "/etc/origin/cloudprovider/vsphere.conf"
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.
Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments section:
kubeletArguments:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/origin/cloudprovider/vsphere.conf"
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, node-config.yaml must be in /etc/origin/node rather than /etc/.
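Before restarting the services, you can quickly confirm that both files reference the cloud provider configuration, for example:
# Verify the cloud provider arguments are present in both configuration files.
grep -A 1 'cloud-provider' /etc/origin/master/master-config.yaml
grep -A 1 'cloud-provider' /etc/origin/node/node-config.yaml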
Start or restart OpenShift Container Platform services on all master and node hosts to apply your configuration changes. See Restarting OpenShift Container Platform services:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
# systemctl restart atomic-openshift-node
Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider's instance-id (which is what the cloud provider specifies). To resolve this issue:
Log in to the CLI as a cluster administrator.
Check and back up existing node labels:
$ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
Delete the nodes:
$ oc delete node <node_name>
On each node host, restart the OpenShift Container Platform service.
# systemctl restart atomic-openshift-node
Add back any labels on each node that you previously had.
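For example, if the backup from the earlier step showed region and zone labels (the keys and values here are only placeholders), reapply them with oc label:
# Reapply previously recorded labels; replace the key=value pairs with your own.
oc label node <node_name> region=primary zone=default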
OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes by using snapshots.
To create a backup of PVs:
Stop the application using the PV.
Clone the persistent disk (see the example after this procedure).
Restart the application.
Create a backup of the cloned disk.
Delete the cloned disk.
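The documentation does not prescribe a specific cloning tool. As one example, a VMDK can be cloned from an ESXi shell with vmkfstools; the datastore and file paths here are placeholders:
# Clone the persistent disk VMDK to a backup copy (run on the ESXi host; paths are placeholders).
vmkfstools -i /vmfs/volumes/<datastore>/<path>/<volume>.vmdk \
  /vmfs/volumes/<datastore>/backups/<volume>-clone.vmdk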