OKD can be configured to access Microsoft Azure infrastructure, including using Azure Disk as persistent storage for application data. After Microsoft Azure is configured properly, some additional configuration must be completed on the OKD hosts.
Configuring Microsoft Azure for OKD requires the following role:
Contributor: To create and manage all types of Microsoft Azure resources.
For more information about adding administrator roles, see Add or change Azure subscription administrators.
If you are using Microsoft Azure Disk as a persistent volume with OKD version 3.5 or later, you must enable the Azure Cloud Provider.
All OKD node virtual machines (VMs) running in Microsoft Azure must belong to a single resource group.
Microsoft Azure VMs must have the same names as the corresponding OKD nodes, and the names cannot include capital letters.
If you plan to use Azure Managed Disks:
OKD version 3.7 or later is required.
You must create VMs with Azure Managed Disks.
If you plan to use unmanaged disks:
You must create VMs with unmanaged disks.
If you are using a custom DNS configuration for your OKD cluster or your cluster nodes are in different Microsoft Azure Virtual Networks (VNet), you must configure DNS so that each node in the cluster can resolve IP addresses for other nodes.
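For example, one quick check is to resolve every node's host name from each node; the host names below are placeholders, so substitute your own node names:
$ for node in node1.example.com node2.example.com; do getent hosts ${node}; done
If any name fails to resolve, correct your DNS configuration before continuing.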
Configuring OKD for Azure requires the /etc/azure/azure.conf file on each node host.
If the file does not exist, create it, and add the following:
tenantId: <> (1)
subscriptionId: <> (2)
aadClientId: <> (3)
aadClientSecret: <> (4)
aadTenantId: <> (5)
resourceGroup: <> (6)
location: <> (7)
(1) The AAD tenant ID for the subscription that the cluster is deployed in.
(2) The Azure subscription ID that the cluster is deployed in.
(3) The client ID for an AAD application with RBAC access to talk to Azure RM APIs.
(4) The client secret for an AAD application with RBAC access to talk to Azure RM APIs.
(5) Ensure this is the same as the tenant ID (optional).
(6) The Azure resource group name that the Azure VMs belong to.
(7) The compact-style Azure region, for example southeastasia (optional).
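If you have the Azure CLI installed, the following sketch shows one way to gather these values; the service principal name okd-cloud-provider is only an example:
$ az account show --query tenantId --output tsv
$ az account show --query id --output tsv
$ az ad sp create-for-rbac --name okd-cloud-provider --role Contributor
The appId and password fields in the output of the last command supply the aadClientId and aadClientSecret values.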
Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and update the contents of the apiServerArguments and controllerArguments sections:
kubernetesMasterConfig:
  ...
  apiServerArguments:
    cloud-provider:
      - "azure"
    cloud-config:
      - "/etc/azure/azure.conf"
  controllerArguments:
    cloud-provider:
      - "azure"
    cloud-config:
      - "/etc/azure/azure.conf"
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml should be in /etc/origin/master instead of /etc/.
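To confirm the change on each master, one option is to check that the cloud provider arguments appear in the file, for example:
# grep -A1 "cloud-" /etc/origin/master/master-config.yaml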
Edit or create the node configuration file on all nodes (/etc/origin/node/node-config.yaml by default) and update the contents of the kubeletArguments section:
kubeletArguments:
  cloud-provider:
    - "azure"
  cloud-config:
    - "/etc/azure/azure.conf"
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, node-config.yaml should be in /etc/origin/node instead of /etc/.
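To verify the node configuration on several hosts at once, a simple loop over ssh is one option; the host names below are placeholders:
$ for node in node1.example.com node2.example.com; do ssh ${node} grep -A1 cloud- /etc/origin/node/node-config.yaml; done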
Start or restart OKD services on all master and node hosts to apply your configuration changes. For more information, see Restarting OKD services:
# systemctl restart origin-master-api origin-master-controllers
# systemctl restart origin-node
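After the restart, you can check the status of the services and the registered nodes, for example:
# systemctl status origin-node
$ oc get nodes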
Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider's instance-id (which is what the cloud provider specifies). To resolve this issue:
Log in to the CLI as a cluster administrator.
Check and back up existing node labels:
$ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
Delete the nodes:
$ oc delete node <node_name>
On each node host, restart the OKD service.
# systemctl restart origin-node
Add back any labels that each node previously had.
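For example, assuming a node previously carried a region=infra label (a placeholder used here for illustration), you can re-apply it with:
$ oc label node <node_name> region=infra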