In OpenShift Container Platform version 4.9, you can install a cluster on Microsoft Azure Stack Hub by using infrastructure that you provide.
Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own.
The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. |
You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an Azure Stack Hub account to host the cluster.
You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was tested using version 2.28.0
of the Azure CLI. Azure CLI commands might perform differently based on the version you use.
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
Be sure to also review this site list if you are configuring a proxy. |
In OpenShift Container Platform 4.9, you require access to the internet to install your cluster.
You must have internet access to:
Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. |
Before you can install OpenShift Container Platform, you must configure an Azure project to host it.
All Azure Stack Hub resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure Stack Hub restricts, see Resolve reserved resource name errors in the Azure documentation. |
The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters.
The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters.
Component | Number of components required by default | Description
---|---|---
vCPU | 56 | A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates one bootstrap machine, which is removed after installation, three control plane machines, and three compute machines. Because each of these machines uses 8 vCPUs, a default cluster requires 56 vCPUs. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require.
OS Disk | 7 | VM OS disk must be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps for control plane machines. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure Stack Hub, disk performance is directly dependent on SSD disk sizes, so a disk large enough to provide this throughput is required. Host caching must be set to `ReadOnly` for low read latency and high read IOPS and throughput.
VNet | 1 | Each default cluster requires one Virtual Network (VNet), which contains two subnets.
Network interfaces | 7 | Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.
Network security groups | 2 | Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane subnet and for the compute node subnet.
Network load balancers | 3 | Each cluster creates three load balancers by default. If your applications create more Kubernetes `LoadBalancer` service objects, your cluster uses more load balancers.
Public IP addresses | 2 | The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.
Private IP addresses | 7 | The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address.
To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar’s DNS zone to Azure Stack Hub, see Microsoft’s documentation for Azure Stack Hub datacenter DNS integration.
You can view Azure’s DNS solution by visiting this example for creating DNS zones.
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager
only approves the kubelet client CSRs. The machine-approver
cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
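For example, after the cluster machines are running, you can list pending CSRs and approve them individually with the OpenShift CLI; the CSR name shown is a placeholder:

$ oc get csr

$ oc adm certificate approve <csr_name>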
Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use:
Owner
To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation.
Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it.
Install or update the Azure CLI.
Your Azure account has the required roles for the subscription that you use.
Register your environment:
$ az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> (1)
1 | Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. |
See the Microsoft documentation for details.
Set the active environment:
$ az cloud set -n AzureStackCloud
Update your environment configuration to use the specific API version for Azure Stack Hub:
$ az cloud update --profile 2019-03-01-hybrid
Log in to the Azure CLI:
$ az login
If you are in a multitenant environment, you must also supply the tenant ID.
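For example, the following command logs in against a specific tenant; the tenant ID is a placeholder:

$ az login --tenant <tenant_id>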
If your Azure account uses subscriptions, ensure that you are using the right subscription:
View the list of available accounts and record the tenantId
value for the
subscription you want to use for your cluster:
$ az account list --refresh
[
{
"cloudName": AzureStackCloud",
"id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
"isDefault": true,
"name": "Subscription Name",
"state": "Enabled",
"tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
"user": {
"name": "you@example.com",
"type": "user"
}
}
]
View your active account details and confirm that the tenantId
value matches
the subscription you want to use:
$ az account show
{
"environmentName": AzureStackCloud",
"id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
"isDefault": true,
"name": "Subscription Name",
"state": "Enabled",
"tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", (1)
"user": {
"name": "you@example.com",
"type": "user"
}
}
1 | Ensure that the tenantId value corresponds to the subscription that you want to use. |
If you are not using the right subscription, change the active subscription:
$ az account set -s <subscription_id> (1)
1 | Specify the subscription ID. |
Verify the subscription ID update:
$ az account show
{
"environmentName": AzureStackCloud",
"id": "33212d16-bdf6-45cb-b038-f6565b61edda",
"isDefault": true,
"name": "Subscription Name",
"state": "Enabled",
"tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee",
"user": {
"name": "you@example.com",
"type": "user"
}
}
Record the tenantId
and id
parameter values from the output. You need these values during the OpenShift Container Platform installation.
Create the service principal for your account:
$ az ad sp create-for-rbac --role Contributor --name <service_principal> \ (1)
  --scopes /subscriptions/<subscription_id> \ (2)
--years <years> (3)
1 | Specify the service principal name. |
2 | Specify the subscription ID. |
3 | Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. |
Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>'
The output includes credentials that you must protect. Be sure that you do not
include these credentials in your code or check the credentials into your source
control. For more information, see https://aka.ms/azadsp-cli
{
"appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6",
"displayName": <service_principal>",
"password": "00000000-0000-0000-0000-000000000000",
"tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee"
}
Record the values of the appId
and password
parameters from the previous
output. You need these values during OpenShift Container Platform installation.
For more information about CCO modes, see About the Cloud Credential Operator.
Before you install OpenShift Container Platform, download the installation file on a local computer.
You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select Azure as the cloud provider if you are installing your cluster on Azure Stack Hub.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. |
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. |
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
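For example, assuming your private key is ~/.ssh/id_ed25519 and <node_ip> is the address of one of your cluster nodes, you can connect as the core user with:

$ ssh -i ~/.ssh/id_ed25519 core@<node_ip>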
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
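One way to invoke the gather command on user-provisioned infrastructure is to pass the machine addresses explicitly; the IP addresses below are placeholders for your environment:

$ ./openshift-install gather bootstrap --dir <installation_directory> \
    --bootstrap <bootstrap_ip> --master <control_plane_ip>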
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. |
If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities, such as ~/.ssh/id_rsa and ~/.ssh/id_dsa, are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
To install OpenShift Container Platform on Microsoft Azure Stack Hub using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You manually create the install-config.yaml
file, and then generate and customize the Kubernetes manifests and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.
You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. |
Customize the sample install-config.yaml
file template that is provided and save
it in the <installation_directory>
.
You must name this configuration file install-config.yaml. |
Make the following modifications for Azure Stack Hub:
Set the replicas
parameter to 0
for the compute
pool:
compute:
- hyperthreading: Enabled
name: worker
platform: {}
replicas: 0 (1)
1 | Set to 0 . |
The compute machines will be provisioned manually later.
Update the platform.azure
section of the install-config.yaml
file to configure your Azure Stack Hub configuration:
platform:
azure:
armEndpoint: <azurestack_arm_endpoint> (1)
baseDomainResourceGroupName: <resource_group> (2)
cloudName: AzureStackCloud (3)
region: <azurestack_region> (4)
1 | Specify the Azure Resource Manager endpoint of your Azure Stack Hub environment, like https://management.local.azurestack.external . |
2 | Specify the name of the resource group that contains the DNS zone for your base domain. |
3 | Specify the Azure Stack Hub environment, which is used to configure the Azure SDK with the appropriate Azure API endpoints. |
4 | Specify the name of your Azure Stack Hub region. |
Back up the install-config.yaml
file so that you can use it to install
multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
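For example, you can copy the file to a location outside of the installation directory; the backup directory shown is a placeholder:

$ cp <installation_directory>/install-config.yaml <backup_directory>/install-config.yaml.backup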
You can customize the install-config.yaml
file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. |
apiVersion: v1
baseDomain: example.com
controlPlane: (1)
name: master
replicas: 3
compute: (1)
- name: worker
platform: {}
replicas: 0
metadata:
name: test-cluster (2)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
azure:
armEndpoint: azurestack_arm_endpoint (3)
baseDomainResourceGroupName: resource_group (4)
region: azure_stack_local_region (5)
resourceGroupName: existing_resource_group (6)
outboundType: Loadbalancer
cloudName: AzureStackCloud (7)
pullSecret: '{"auths": ...}' (8)
fips: false (9)
additionalTrustBundle: | (10)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
sshKey: ssh-ed25519 AAAA... (11)
1 | The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. |
2 | Specify the name of the cluster. | ||
3 | Specify the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. | ||
4 | Specify the name of the resource group that contains the DNS zone for your base domain. | ||
5 | Specify the name of your Azure Stack Hub local region. | ||
6 | Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. | ||
7 | Specify the Azure Stack Hub environment as your target platform. | ||
8 | Specify the pull secret required to authenticate your cluster. | ||
9 | Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. | ||
10 | If your Azure Stack Hub environment uses an internal certificate authority (CA), add the necessary certificate bundle in .pem format. | ||
11 | You can optionally provide the sshKey value that you use to access the machines in your cluster. |
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). |
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
...
1 | A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be http . |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or
other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in
the openshift-config namespace to hold the additional CA
certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network
Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter
with the RHCOS trust bundle. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the RHCOS trust
bundle. |
The installation program does not support the proxy readinessEndpoints field. |
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy
settings in the provided install-config.yaml
file. If no proxy settings are
provided, a cluster
Proxy
object is still created, but it will have a nil
spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created. |
You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates to assist in completing a user-provisioned infrastructure installation on Microsoft Azure Stack Hub.
Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. |
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Export common variables found in the install-config.yaml
to be used by the
provided ARM templates:
$ export CLUSTER_NAME=<cluster_name>(1)
$ export AZURE_REGION=<azure_region>(2)
$ export SSH_KEY=<ssh_key>(3)
$ export BASE_DOMAIN=<base_domain>(4)
$ export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group>(5)
1 | The value of the .metadata.name attribute from the install-config.yaml file. |
2 | The region to deploy the cluster into. This is the value of the .platform.azure.region attribute from the install-config.yaml file. |
3 | The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. |
4 | The base domain to deploy the cluster to. The base domain corresponds to the DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. |
5 | The resource group where the DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. |
For example:
$ export CLUSTER_NAME=test-cluster
$ export AZURE_REGION=centralus
$ export SSH_KEY="ssh-rsa xxx/xxx/xxx= user@email.com"
$ export BASE_DOMAIN=example.com
$ export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored the installation files in. |
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.
You obtained the OpenShift Container Platform installation program.
You created the install-config.yaml
installation configuration file.
Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 | For <installation_directory> , specify the installation directory that
contains the install-config.yaml file you created. |
Remove the Kubernetes manifest files that define the control plane machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
By removing these files, you prevent the cluster from automatically generating control plane machines.
Remove the Kubernetes manifest files that define the worker machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage the worker machines yourself, you do not need to initialize these machines.
Check that the mastersSchedulable
parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml
Kubernetes manifest file is set to false
. This setting prevents pods from being scheduled on the control plane machines:
Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml
file.
Locate the mastersSchedulable
parameter and ensure that it is set to false
.
Save and exit the file.
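For reference, the relevant portion of the cluster-scheduler-02-config.yml manifest is expected to look roughly like the following; only the mastersSchedulable value matters for this step:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false   # prevents pods from being scheduled on control plane machines
status: {}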
Optional: If you do not want
the Ingress Operator
to create DNS records on your behalf, remove the privateZone
and publicZone
sections from the <installation_directory>/manifests/cluster-dns-02-config.yml
DNS configuration file:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
creationTimestamp: null
name: cluster
spec:
baseDomain: example.openshift.com
privateZone: (1)
id: mycluster-100419-private-zone
publicZone: (1)
id: example.openshift.com
status: {}
1 | Remove this section completely. |
If you do so, you must add ingress DNS records manually in a later step.
Optional: If your Azure Stack Hub environment uses an internal certificate authority (CA), you must update the .spec.trustedCA.name
field in the <installation_directory>/manifests/cluster-proxy-01-config.yaml
file to use user-ca-bundle
:
...
spec:
trustedCA:
name: user-ca-bundle
...
Later, you must update your bootstrap ignition to include the CA.
When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates:
Export the infrastructure ID by using the following command:
$ export INFRA_ID=<infra_id> (1)
1 | The OpenShift Container Platform cluster has been assigned an identifier (INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. |
Export the resource group by using the following command:
$ export RESOURCE_GROUP=<resource_group> (1)
1 | All resources created in this Azure deployment exist as part of a resource group. The resource group name is also based on the INFRA_ID, in the form of <cluster_name>-<random_string>-rg. This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. |
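If you prefer to pull these values directly from the manifest rather than copying them by hand, a minimal sketch using awk works because both values appear as simple key: value scalars in this file:

$ export INFRA_ID=$(awk '$1 == "infrastructureName:" {print $2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml)

$ export RESOURCE_GROUP=$(awk '$1 == "resourceGroupName:" {print $2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml)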
Manually create your cloud credentials.
From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install
binary is built to use:
$ openshift-install version
release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64
Locate all CredentialsRequest
objects in this release image that target the cloud you are deploying on:
$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure
This command creates a YAML file for each CredentialsRequest
object.
CredentialsRequest
objectapiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
labels:
controller-tools.k8s.io: "1.0"
name: openshift-image-registry-azure
namespace: openshift-cloud-credential-operator
spec:
secretRef:
name: installer-cloud-credentials
namespace: openshift-image-registry
providerSpec:
apiVersion: cloudcredential.openshift.io/v1
kind: AzureProviderSpec
roleBindings:
- role: Contributor
Create YAML files for secrets in the openshift-install
manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef
for each CredentialsRequest
object. The format for the secret data varies for each cloud provider.
Create a cco-configmap.yaml
file in the manifests directory with the Cloud Credential Operator (CCO) disabled:
ConfigMap
objectapiVersion: v1
kind: ConfigMap
metadata:
name: cloud-credential-operator-config
namespace: openshift-cloud-credential-operator
annotations:
release.openshift.io/create-only: "true"
data:
disabled: "true"
To create the Ignition configuration files, run the following command from the directory that contains the installation program:
$ ./openshift-install create ignition-configs --dir <installation_directory> (1)
1 | For <installation_directory> , specify the same installation directory. |
Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password
and kubeconfig
files are created in the ./<installation_directory>/auth
directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var
partition or a subdirectory of /var
. For example:
/var/lib/containers
: Holds container-related content that can grow as more images and containers are added to a system.
/var/lib/etcd
: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
/var
: Holds data that you might want to keep separate for purposes such as auditing.
Storing the contents of a /var
directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.
Because /var
must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var
partition by creating a machine config manifest that is inserted during the openshift-install
preparation phases of an OpenShift Container Platform installation.
If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. |
Create a directory to hold the OpenShift Container Platform installation files:
$ mkdir $HOME/clusterconfig
Run openshift-install
to create a set of files in the manifest
and openshift
subdirectories. Answer the system questions as you are prompted:
$ openshift-install create manifests --dir $HOME/clusterconfig
? SSH Public Key ...
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift
Optional: Confirm that the installation program created manifests in the clusterconfig/openshift
directory:
$ ls $HOME/clusterconfig/openshift/
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml
...
Create a Butane config that configures the additional partition. For example, name the file $HOME/clusterconfig/98-var-partition.bu
, change the disk device name to the name of the storage device on the worker
systems, and set the storage size as appropriate. This example places the /var
directory on a separate partition:
variant: openshift
version: 4.9.0
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 98-var-partition
storage:
disks:
- device: /dev/<device_name> (1)
partitions:
- label: var
start_mib: <partition_start_offset> (2)
size_mib: <partition_size> (3)
filesystems:
- device: /dev/disk/by-partlabel/var
path: /var
format: xfs
mount_options: [defaults, prjquota] (4)
with_mount_unit: true
1 | The storage device name of the disk that you want to partition. |
2 | When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. |
3 | The size of the data partition in mebibytes. |
4 | The prjquota mount option must be enabled for filesystems used for container storage. |
When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. |
Create a manifest from the Butane config and save it to the clusterconfig/openshift
directory. For example, run the following command:
$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml
Run openshift-install
again to create Ignition configs from a set of files in the manifest
and openshift
subdirectories:
$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth bootstrap.ign master.ign metadata.json worker.ign
Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
You must create a Microsoft Azure resource group. This is used during the installation of your OpenShift Container Platform cluster on Azure Stack Hub.
Configure an Azure account.
Generate the Ignition config files for your cluster.
Create the resource group in a supported Azure region:
$ az group create --name ${RESOURCE_GROUP} --location ${AZURE_REGION}
The Azure client does not support deployments based on files existing locally; therefore, you must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment.
Configure an Azure account.
Generate the Ignition config files for your cluster.
Create an Azure storage account to store the VHD cluster image:
$ az storage account create -g ${RESOURCE_GROUP} --location ${AZURE_REGION} --name ${CLUSTER_NAME}sa --kind Storage --sku Standard_LRS
The Azure storage account name must be between 3 and 24 characters in length and
use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information about Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. |
Export the storage account key as an environment variable:
$ export ACCOUNT_KEY=`az storage account keys list -g ${RESOURCE_GROUP} --account-name ${CLUSTER_NAME}sa --query "[0].value" -o tsv`
Choose the RHCOS version to use and export the URL of its VHD to an environment variable:
$ export COMPRESSED_VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.9/data/data/rhcos-amd64.json | jq -r '(.baseURI + .images.azurestack.path)'`
The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. |
Create the storage container for the VHD:
$ az storage container create --name vhd --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY}
Download the compressed RHCOS VHD file locally:
$ curl -O -L ${COMPRESSED_VHD_URL}
Decompress the VHD file.
The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. |
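For example, assuming the download is a gzip-compressed file with a .vhd.gz extension, you can decompress it in place with gunzip:

$ gunzip rhcos-<rhcos_version>-azurestack.x86_64.vhd.gz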
Copy the chosen VHD to a blob:
$ az storage blob upload --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd
Create a blob storage container and upload the generated bootstrap.ign
file:
$ az storage container create --name files --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --public-access blob
$ az storage blob upload --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign"
DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario.
For this example, Azure Stack Hub’s datacenter DNS integration is used, so you will create a DNS zone.
The DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the DNS zone; be sure the installation config you generated earlier reflects that scenario. |
Configure an Azure account.
Generate the Ignition config files for your cluster.
Create the new DNS zone in the resource group exported in the
BASE_DOMAIN_RESOURCE_GROUP
environment variable:
$ az network dns zone create -g ${BASE_DOMAIN_RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN}
You can skip this step if you are using a DNS zone that already exists.
You can learn more about configuring a DNS zone in Azure Stack Hub by visiting that section.
You must create a virtual network (VNet) in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template.
If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure an Azure account.
Generate the Ignition config files for your cluster.
Copy the template from the ARM template for the VNet section of this topic
and save it as 01_vnet.json
in your cluster’s installation directory. This template describes the
VNet that your cluster requires.
Create the deployment by using the az
CLI:
$ az deployment group create -g ${RESOURCE_GROUP} \
--template-file "<installation_directory>/01_vnet.json" \
--parameters baseName="${INFRA_ID}"(1)
1 | The base name to be used in resource names; this is usually the cluster’s infrastructure ID. |
You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster:
01_vnet.json
ARM template{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"addressPrefix" : "10.0.0.0/16",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetPrefix" : "10.0.0.0/24",
"nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]",
"nodeSubnetPrefix" : "10.0.1.0/24",
"clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]"
},
"resources" : [
{
"apiVersion" : "2017-10-01",
"type" : "Microsoft.Network/virtualNetworks",
"name" : "[variables('virtualNetworkName')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]"
],
"properties" : {
"addressSpace" : {
"addressPrefixes" : [
"[variables('addressPrefix')]"
]
},
"subnets" : [
{
"name" : "[variables('masterSubnetName')]",
"properties" : {
"addressPrefix" : "[variables('masterSubnetPrefix')]",
"serviceEndpoints": [],
"networkSecurityGroup" : {
"id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]"
}
}
},
{
"name" : "[variables('nodeSubnetName')]",
"properties" : {
"addressPrefix" : "[variables('nodeSubnetPrefix')]",
"serviceEndpoints": [],
"networkSecurityGroup" : {
"id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]"
}
}
}
]
}
},
{
"type" : "Microsoft.Network/networkSecurityGroups",
"name" : "[variables('clusterNsgName')]",
"apiVersion" : "2017-10-01",
"location" : "[variables('location')]",
"properties" : {
"securityRules" : [
{
"name" : "apiserver_in",
"properties" : {
"protocol" : "Tcp",
"sourcePortRange" : "*",
"destinationPortRange" : "6443",
"sourceAddressPrefix" : "*",
"destinationAddressPrefix" : "*",
"access" : "Allow",
"priority" : 101,
"direction" : "Inbound"
}
},
{
"name" : "ign_in",
"properties" : {
"protocol" : "*",
"sourcePortRange" : "*",
"destinationPortRange" : "22623",
"sourceAddressPrefix" : "*",
"destinationAddressPrefix" : "*",
"access" : "Allow",
"priority" : 102,
"direction" : "Inbound"
}
}
]
}
}
]
}
You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure Stack Hub for your OpenShift Container Platform nodes.
Configure an Azure account.
Generate the Ignition config files for your cluster.
Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container.
Store the bootstrap Ignition config file in an Azure storage container.
Copy the template from the ARM template for image storage section of
this topic and save it as 02_storage.json
in your cluster’s installation directory. This template
describes the image storage that your cluster requires.
Export the RHCOS VHD blob URL as a variable:
$ export VHD_BLOB_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv`
Deploy the cluster image:
$ az deployment group create -g ${RESOURCE_GROUP} \
--template-file "<installation_directory>/02_storage.json" \
--parameters vhdBlobURL="${VHD_BLOB_URL}" \ (1)
--parameters baseName="${INFRA_ID}"(2)
1 | The blob URL of the RHCOS VHD to be used to create master and worker machines. |
2 | The base name to be used in resource names; this is usually the cluster’s infrastructure ID. |
You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster:
02_storage.json
ARM template{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"vhdBlobURL" : {
"type" : "string",
"metadata" : {
"description" : "URL pointing to the blob where the VHD to be used to create master and worker machines is located"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"imageName" : "[concat(parameters('baseName'), '-image')]"
},
"resources" : [
{
"apiVersion" : "2017-12-01",
"type": "Microsoft.Compute/images",
"name": "[variables('imageName')]",
"location" : "[variables('location')]",
"properties": {
"storageProfile": {
"osDisk": {
"osType": "Linux",
"osState": "Generalized",
"blobUri": "[parameters('vhdBlobURL')]",
"storageAccountType": "Standard_LRS"
}
}
}
}
]
}
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs
during boot
to fetch their Ignition config files.
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.
This section provides details about the ports that are required.
In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. |
Ports used for all-machine to all-machine communications

Protocol | Port | Description
---|---|---
ICMP | N/A | Network reachability tests
TCP | 1936 | Metrics
 | 9000-9999 | Host level services, including the node exporter on ports 9100-9101
 | 10250-10259 | The default ports that Kubernetes reserves
 | 10256 | openshift-sdn
UDP | 4789 | VXLAN and Geneve
 | 6081 | VXLAN and Geneve
 | 9000-9999 | Host level services, including the node exporter on ports 9100-9101
 | 500 | IPsec IKE packets
 | 4500 | IPsec NAT-T packets
TCP/UDP | 30000-32767 | Kubernetes node port
ESP | N/A | IPsec Encapsulating Security Payload (ESP)

Ports used for all-machine to control plane communications

Protocol | Port | Description
---|---|---
TCP | 6443 | Kubernetes API

Ports used for control plane machine to control plane machine communications

Protocol | Port | Description
---|---|---
TCP | 2379-2380 | etcd server and peer ports
You must configure networking and load balancing in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template.
Load balancing requires the following DNS records:
An api DNS record for the API public load balancer in the DNS zone.
An api-int DNS record for the API internal load balancer in the DNS zone.
If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure an Azure account.
Generate the Ignition config files for your cluster.
Create and configure a VNet and associated subnets in Azure Stack Hub.
Copy the template from the ARM template for the network and load balancers
section of this topic and save it as 03_infra.json
in your cluster’s installation directory. This
template describes the networking and load balancing objects that your cluster
requires.
Create the deployment by using the az
CLI:
$ az deployment group create -g ${RESOURCE_GROUP} \
--template-file "<installation_directory>/03_infra.json" \
--parameters baseName="${INFRA_ID}"(1)
1 | The base name to be used in resource names; this is usually the cluster’s infrastructure ID. |
Create an api DNS record and an api-int DNS record. When creating the API DNS records, the ${BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the DNS zone exists.
Export the following variable:
$ export PUBLIC_IP=`az network public-ip list -g ${RESOURCE_GROUP} --query "[?name=='${INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv`
Export the following variable:
$ export PRIVATE_IP=`az network lb frontend-ip show -g "$RESOURCE_GROUP" --lb-name "${INFRA_ID}-internal" -n internal-lb-ip --query "privateIpAddress" -o tsv`
Create the api DNS record in a new DNS zone:
$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n api -a ${PUBLIC_IP} --ttl 60
If you are adding the cluster to an existing DNS zone, you can create the api DNS record in it instead:
$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n api.${CLUSTER_NAME} -a ${PUBLIC_IP} --ttl 60
Create the api-int DNS record in a new DNS zone:
$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z "${CLUSTER_NAME}.${BASE_DOMAIN}" -n api-int -a ${PRIVATE_IP} --ttl 60
If you are adding the cluster to an existing DNS zone, you can create the api-int DNS record in it instead:
$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n api-int.${CLUSTER_NAME} -a ${PRIVATE_IP} --ttl 60
You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster:
03_infra.json
ARM template{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
"masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]",
"masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'))]",
"masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]",
"masterAvailabilitySetName" : "[concat(parameters('baseName'), '-avset')]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal')]",
"internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]",
"skuName": "Basic"
},
"resources" : [
{
"apiVersion": "2017-03-30",
"type" : "Microsoft.Compute/availabilitySets",
"name" : "[variables('masterAvailabilitySetName')]",
"location" : "[variables('location')]",
"properties": {
"platformFaultDomainCount": "2",
"platformUpdateDomainCount": "5"
},
"sku": {
"name": "Aligned"
}
},
{
"apiVersion" : "2017-10-01",
"type" : "Microsoft.Network/publicIPAddresses",
"name" : "[variables('masterPublicIpAddressName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"properties" : {
"publicIPAllocationMethod" : "Static",
"dnsSettings" : {
"domainNameLabel" : "[variables('masterPublicIpAddressName')]"
}
}
},
{
"apiVersion" : "2017-10-01",
"type" : "Microsoft.Network/loadBalancers",
"name" : "[variables('masterLoadBalancerName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"dependsOn" : [
"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]"
],
"properties" : {
"frontendIPConfigurations" : [
{
"name" : "public-lb-ip",
"properties" : {
"publicIPAddress" : {
"id" : "[variables('masterPublicIpAddressID')]"
}
}
}
],
"backendAddressPools" : [
{
"name" : "[variables('masterLoadBalancerName')]"
}
],
"loadBalancingRules" : [
{
"name" : "api-public",
"properties" : {
"frontendIPConfiguration" : {
"id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]"
},
"backendAddressPool" : {
"id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]"
},
"protocol" : "Tcp",
"loadDistribution" : "Default",
"idleTimeoutInMinutes" : 30,
"frontendPort" : 6443,
"backendPort" : 6443,
"probe" : {
"id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-public-probe')]"
}
}
}
],
"probes" : [
{
"name" : "api-public-probe",
"properties" : {
"protocol" : "Tcp",
"port" : 6443,
"intervalInSeconds" : 10,
"numberOfProbes" : 3
}
}
]
}
},
{
"apiVersion" : "2017-10-01",
"type" : "Microsoft.Network/loadBalancers",
"name" : "[variables('internalLoadBalancerName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"properties" : {
"frontendIPConfigurations" : [
{
"name" : "internal-lb-ip",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"privateIPAddressVersion" : "IPv4"
}
}
],
"backendAddressPools" : [
{
"name" : "[variables('internalLoadBalancerName')]"
}
],
"loadBalancingRules" : [
{
"name" : "api-internal",
"properties" : {
"frontendIPConfiguration" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]"
},
"frontendPort" : 6443,
"backendPort" : 6443,
"enableFloatingIP" : false,
"idleTimeoutInMinutes" : 30,
"protocol" : "Tcp",
"enableTcpReset" : false,
"loadDistribution" : "Default",
"backendAddressPool" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/', variables('internalLoadBalancerName'))]"
},
"probe" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]"
}
}
},
{
"name" : "sint",
"properties" : {
"frontendIPConfiguration" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]"
},
"frontendPort" : 22623,
"backendPort" : 22623,
"enableFloatingIP" : false,
"idleTimeoutInMinutes" : 30,
"protocol" : "Tcp",
"enableTcpReset" : false,
"loadDistribution" : "Default",
"backendAddressPool" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/', variables('internalLoadBalancerName'))]"
},
"probe" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]"
}
}
}
],
"probes" : [
{
"name" : "api-internal-probe",
"properties" : {
"protocol" : "Tcp",
"port" : 6443,
"intervalInSeconds" : 10,
"numberOfProbes" : 3
}
},
{
"name" : "sint-probe",
"properties" : {
"protocol" : "Tcp",
"port" : 22623,
"intervalInSeconds" : 10,
"numberOfProbes" : 3
}
}
]
}
}
]
}
You must create the bootstrap machine in Microsoft Azure Stack Hub to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template.
If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure an Azure account.
Generate the Ignition config files for your cluster.
Create and configure a VNet and associated subnets in Azure Stack Hub.
Create and configure networking and load balancers in Azure Stack Hub.
Create control plane and compute roles.
Copy the template from the ARM template for the bootstrap machine section of
this topic and save it as 04_bootstrap.json
in your cluster’s installation directory. This template
describes the bootstrap machine that your cluster requires.
Export the bootstrap URL variable:
$ export BOOTSTRAP_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c "files" -n "bootstrap.ign" -o tsv`
Export the bootstrap ignition variable:
If your environment uses a public certificate authority (CA), run this command:
$ export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url ${BOOTSTRAP_URL} '{ignition:{version:$v,config:{replace:{source:$url}}}}' | base64 | tr -d '\n'`
If your environment uses an internal CA, you must add your PEM encoded bundle to the bootstrap ignition stub so that your bootstrap virtual machine can pull the bootstrap ignition from the storage account. Run the following commands, which assume your CA is in a file called CA.pem
:
$ export CA="data:text/plain;charset=utf-8;base64,$(cat CA.pem |base64 |tr -d '\n')"
$ export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url "$BOOTSTRAP_URL" --arg cert "$CA" '{ignition:{version:$v,security:{tls:{certificateAuthorities:[{source:$cert}]}},config:{replace:{source:$url}}}}' | base64 | tr -d '\n'`
Create the deployment by using the az
CLI:
$ az deployment group create --verbose -g ${RESOURCE_GROUP} \
--template-file "<installation_directory>/04_bootstrap.json" \
--parameters bootstrapIgnition="${BOOTSTRAP_IGNITION}" \ (1)
--parameters sshKeyData="${SSH_KEY}" \ (2)
--parameters baseName="${INFRA_ID}" \ (3)
--parameters diagnosticsStorageAccountName="${CLUSTER_NAME}sa" (4)
1 | The bootstrap Ignition content for the bootstrap cluster. |
2 | The SSH RSA public key file as a string. |
3 | The base name to be used in resource names; this is usually the cluster’s infrastructure ID. |
4 | The name of the storage account for your cluster. |
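After the deployment completes, you can optionally confirm that the bootstrap virtual machine was provisioned. This verification is not part of the documented procedure and assumes the default resource names that the template creates:
$ az vm show -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap --query provisioningState -o tsv
If the deployment succeeded, the command returns Succeeded.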
You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster:
04_bootstrap.json
ARM template{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"bootstrapIgnition" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Bootstrap ignition content for the bootstrap cluster"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string."
}
},
"diagnosticsStorageAccountName": {
"type": "string"
},
"bootstrapVMSize" : {
"type" : "string",
"defaultValue" : "Standard_DS4_v2",
"metadata" : {
"description" : "The size of the Bootstrap Virtual Machine"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'))]",
"masterAvailabilitySetName" : "[concat(parameters('baseName'), '-avset')]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal')]",
"sshKeyPath" : "/home/core/.ssh/authorized_keys",
"vmName" : "[concat(parameters('baseName'), '-bootstrap')]",
"nicName" : "[concat(variables('vmName'), '-nic')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]",
"sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]"
},
"resources" : [
{
"apiVersion" : "2017-10-01",
"type" : "Microsoft.Network/publicIPAddresses",
"name" : "[variables('sshPublicIpAddressName')]",
"location" : "[variables('location')]",
"sku": {
"name": "Basic"
},
"properties" : {
"publicIPAllocationMethod" : "Static",
"dnsSettings" : {
"domainNameLabel" : "[variables('sshPublicIpAddressName')]"
}
}
},
{
"apiVersion" : "2017-10-01",
"type" : "Microsoft.Network/networkInterfaces",
"name" : "[variables('nicName')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]"
],
"properties" : {
"securityRules": [
{
"properties": {
"description": "ssh-in-nic",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "22"
}}],
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]"
},
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"loadBalancerBackendAddressPools" : [
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]"
},
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/', variables('internalLoadBalancerName'))]"
}
]
}
}
]
}
},
{
"name": "[parameters('diagnosticsStorageAccountName')]",
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2017-10-01",
"location": "[variables('location')]",
"properties": {},
"kind": "Storage",
"sku": {
"name": "Standard_LRS"
}
},
{
"apiVersion" : "2017-12-01",
"type" : "Microsoft.Compute/virtualMachines",
"name" : "[variables('vmName')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]",
"[concat('Microsoft.Storage/storageAccounts/', parameters('diagnosticsStorageAccountName'))]"
],
"properties" : {
"availabilitySet": {
"id": "[resourceId('Microsoft.Compute/availabilitySets',variables('masterAvailabilitySetName'))]"
},
"hardwareProfile" : {
"vmSize" : "[parameters('bootstrapVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vmName')]",
"adminUsername" : "core",
"customData" : "[parameters('bootstrapIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vmName'),'_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"managedDisk": {
"storageAccountType": "Standard_LRS"
},
"diskSizeGB" : 100
}
},
"networkProfile" : {
"networkInterfaces" : [
{
"id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts', parameters('diagnosticsStorageAccountName'))).primaryEndpoints.blob]"
}
}
}
},
{
"apiVersion" : "2017-10-01",
"type": "Microsoft.Network/networkSecurityGroups/securityRules",
"name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
],
"properties": {
"protocol" : "Tcp",
"sourcePortRange" : "*",
"destinationPortRange" : "22",
"sourceAddressPrefix" : "*",
"destinationAddressPrefix" : "*",
"access" : "Allow",
"priority" : 100,
"direction" : "Inbound"
}
}
]
}
You must create the control plane machines in Microsoft Azure Stack Hub for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template.
If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure an Azure account.
Generate the Ignition config files for your cluster.
Create and configure a VNet and associated subnets in Azure Stack Hub.
Create and configure networking and load balancers in Azure Stack Hub.
Create control plane and compute roles.
Create the bootstrap machine.
Copy the template from the ARM template for control plane machines
section of this topic and save it as 05_masters.json
in your cluster’s installation directory.
This template describes the control plane machines that your cluster requires.
Export the following variable needed by the control plane machine deployment:
$ export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'`
Create the deployment by using the az
CLI:
$ az deployment group create -g ${RESOURCE_GROUP} \
--template-file "<installation_directory>/05_masters.json" \
--parameters masterIgnition="${MASTER_IGNITION}" \ (1)
--parameters sshKeyData="${SSH_KEY}" \ (2)
--parameters baseName="${INFRA_ID}" \ (3)
--parameters diagnosticsStorageAccountName="${CLUSTER_NAME}sa" (4)
1 | The Ignition content for the control plane nodes (also known as the master nodes). |
2 | The SSH RSA public key file as a string. |
3 | The base name to be used in resource names; this is usually the cluster’s infrastructure ID. |
4 | The name of the storage account for your cluster. |
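After the deployment completes, you can optionally list the control plane virtual machines and their provisioning states. This check is not part of the documented procedure and assumes the default -master- naming that the template uses:
$ az vm list -g ${RESOURCE_GROUP} --query "[?contains(name, 'master')].[name, provisioningState]" -o table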
You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster:
05_masters.json
ARM template{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"masterIgnition" : {
"type" : "string",
"metadata" : {
"description" : "Ignition content for the master nodes"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string"
}
},
"diagnosticsStorageAccountName": {
"type": "string"
},
"masterVMSize" : {
"type" : "string",
"defaultValue" : "Standard_DS4_v2",
"metadata" : {
"description" : "The size of the Master Virtual Machines"
}
},
"diskSizeGB" : {
"type" : "int",
"defaultValue" : 1023,
"metadata" : {
"description" : "Size of the Master VM OS disk, in GB"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'))]",
"masterAvailabilitySetName" : "[concat(parameters('baseName'), '-avset')]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal')]",
"sshKeyPath" : "/home/core/.ssh/authorized_keys",
"clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"numberOfMasters" : 3,
"vms" : {
"copy" : [
{
"name" : "vmNames",
"count" : "[variables('numberOfMasters')]",
"input" : {
"name" : "[concat(parameters('baseName'), string('-master-'), string(copyIndex('vmNames')))]"
}
}
]
}
},
"resources" : [
{
"name": "[parameters('diagnosticsStorageAccountName')]",
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2017-10-01",
"location": "[variables('location')]",
"properties": {},
"kind": "Storage",
"sku": {
"name": "Standard_LRS"
}
},
{
"apiVersion" : "2017-10-01",
"type" : "Microsoft.Network/networkInterfaces",
"location": "[variables('location')]",
"copy" : {
"name" : "nicCopy",
"count" : "[variables('numberOfMasters')]"
},
"name" : "[concat(variables('vms').vmNames[copyIndex()].name, '-nic')]",
"properties" : {
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"loadBalancerBackendAddressPools" : [
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]"
},
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/', variables('internalLoadBalancerName'))]"
}
]
}
}
]
}
},
{
"apiVersion" : "2017-12-01",
"type" : "Microsoft.Compute/virtualMachines",
"location" : "[variables('location')]",
"copy" : {
"name" : "vmCopy",
"count" : "[variables('numberOfMasters')]"
},
"name" : "[variables('vms').vmNames[copyIndex()].name]",
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vms').vmNames[copyIndex()].name, '-nic'))]",
"[concat('Microsoft.Storage/storageAccounts/', parameters('diagnosticsStorageAccountName'))]"
],
"properties" : {
"availabilitySet": {
"id": "[resourceId('Microsoft.Compute/availabilitySets',variables('masterAvailabilitySetName'))]"
},
"hardwareProfile" : {
"vmSize" : "[parameters('masterVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vms').vmNames[copyIndex()].name]",
"adminUsername" : "core",
"customData" : "[parameters('masterIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vms').vmNames[copyIndex()].name, '_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"writeAcceleratorEnabled": false,
"managedDisk": {
"storageAccountType": "Standard_LRS"
},
"diskSizeGB" : "[parameters('diskSizeGB')]"
}
},
"networkProfile" : {
"networkInterfaces" : [
{
"id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vms').vmNames[copyIndex()].name, '-nic'))]",
"properties": {
"primary": false
}
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts', parameters('diagnosticsStorageAccountName'))).primaryEndpoints.blob]"
}
}
}
}
]
}
After you create all of the required infrastructure in Microsoft Azure Stack Hub, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.
Configure an Azure account.
Generate the Ignition config files for your cluster.
Create and configure a VNet and associated subnets in Azure Stack Hub.
Create and configure networking and load balancers in Azure Stack Hub.
Create control plane and compute roles.
Create the bootstrap machine.
Create the control plane machines.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ (1)
--log-level info (2)
1 | For <installation_directory> , specify the path to the directory that you
stored the installation files in. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
If the command exits without a FATAL
warning, your production control plane
has initialized.
Delete the bootstrap resources:
$ az network nsg rule delete -g ${RESOURCE_GROUP} --nsg-name ${INFRA_ID}-nsg --name bootstrap_ssh_in
$ az vm stop -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap
$ az vm deallocate -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap
$ az vm delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap --yes
$ az disk delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap_OSDisk --no-wait --yes
$ az network nic delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-nic --no-wait
$ az storage blob delete --account-key ${ACCOUNT_KEY} --account-name ${CLUSTER_NAME}sa --container-name files --name bootstrap.ign
$ az network public-ip delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-ssh-pip
If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. |
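Optionally, you can confirm that the bootstrap virtual machine no longer exists in the resource group. This check is not part of the documented procedure; an empty result indicates that the machine was deleted:
$ az vm list -g ${RESOURCE_GROUP} --query "[?contains(name, 'bootstrap')].name" -o tsv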
You can create worker machines in Microsoft Azure Stack Hub for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform.
In this example, you manually launch one instance by using the Azure Resource
Manager (ARM) template. Additional instances can be launched by including
additional resources of type 06_workers.json
in the file.
If you do not use the provided ARM template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
Configure an Azure account.
Generate the Ignition config files for your cluster.
Create and configure a VNet and associated subnets in Azure Stack Hub.
Create and configure networking and load balancers in Azure Stack Hub.
Create control plane and compute roles.
Create the bootstrap machine.
Create the control plane machines.
Copy the template from the ARM template for worker machines
section of this topic and save it as 06_workers.json
in your cluster’s installation directory. This
template describes the worker machines that your cluster requires.
Export the following variable needed by the worker machine deployment:
$ export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'`
Create the deployment by using the az
CLI:
$ az deployment group create -g ${RESOURCE_GROUP} \
--template-file "<installation_directory>/06_workers.json" \
--parameters workerIgnition="${WORKER_IGNITION}" \ (1)
--parameters sshKeyData="${SSH_KEY}" \ (2)
  --parameters baseName="${INFRA_ID}" \ (3)
--parameters diagnosticsStorageAccountName="${CLUSTER_NAME}sa" (4)
1 | The Ignition content for the worker nodes. |
2 | The SSH RSA public key file as a string. |
3 | The base name to be used in resource names; this is usually the cluster’s infrastructure ID. |
4 | The name of the storage account for your cluster. |
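After the deployment completes, you can optionally confirm that the worker virtual machines were provisioned. This check is not part of the documented procedure and assumes the default -worker- naming that the template uses:
$ az vm list -g ${RESOURCE_GROUP} --query "[?contains(name, 'worker')].[name, provisioningState]" -o table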
You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster:
06_workers.json
ARM template{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"workerIgnition" : {
"type" : "string",
"metadata" : {
"description" : "Ignition content for the worker nodes"
}
},
"numberOfNodes" : {
"type" : "int",
"defaultValue" : 3,
"minValue" : 2,
"maxValue" : 30,
"metadata" : {
"description" : "Number of OpenShift compute nodes to deploy"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string"
}
},
"diagnosticsStorageAccountName": {
"type": "string"
},
"nodeVMSize" : {
"type" : "string",
"defaultValue" : "Standard_DS4_v2",
"metadata" : {
"description" : "The size of the each Node Virtual Machine"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]",
"nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]",
"infraLoadBalancerName" : "[parameters('baseName')]",
"sshKeyPath" : "/home/core/.ssh/authorized_keys",
"identityName" : "[concat(parameters('baseName'), '-identity')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"masterAvailabilitySetName" : "[concat(parameters('baseName'), '-avset')]",
"numberOfNodes" : "[parameters('numberOfNodes')]",
"vms" : {
"copy" : [
{
"name" : "vmNames",
"count" : "[parameters('numberOfNodes')]",
"input" : {
"name" : "[concat(parameters('baseName'), string('-worker-'), string(copyIndex('vmNames')))]"
}
}
]
}
},
"resources" : [
{
"name": "[parameters('diagnosticsStorageAccountName')]",
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2017-10-01",
"location": "[variables('location')]",
"properties": {},
"kind": "Storage",
"sku": {
"name": "Standard_LRS"
}
},
{
"apiVersion" : "2017-10-01",
"type" : "Microsoft.Network/networkInterfaces",
"location": "[variables('location')]",
"copy" : {
"name" : "nicCopy",
"count" : "[variables('numberOfNodes')]"
},
"name" : "[concat(variables('vms').vmNames[copyIndex()].name, '-nic')]",
"properties" : {
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('nodeSubnetRef')]"
}
}
}
]
}
},
{
"apiVersion" : "2017-12-01",
"type" : "Microsoft.Compute/virtualMachines",
"location" : "[variables('location')]",
"copy" : {
"name" : "vmCopy",
"count" : "[variables('numberOfNodes')]"
},
"name" : "[variables('vms').vmNames[copyIndex()].name]",
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vms').vmNames[copyIndex()].name, '-nic'))]",
"[concat('Microsoft.Storage/storageAccounts/', parameters('diagnosticsStorageAccountName'))]"
],
"properties" : {
"availabilitySet": {
"id": "[resourceId('Microsoft.Compute/availabilitySets',variables('masterAvailabilitySetName'))]"
},
"hardwareProfile" : {
"vmSize" : "[parameters('nodeVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vms').vmNames[copyIndex()].name]",
"adminUsername" : "core",
"customData" : "[parameters('workerIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vms').vmNames[copyIndex()].name,'_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"managedDisk": {
"storageAccountType": "Standard_LRS"
},
"diskSizeGB": 128
}
},
"networkProfile" : {
"networkInterfaces" : [
{
"id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vms').vmNames[copyIndex()].name, '-nic'))]",
"properties": {
"primary": true
}
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts', parameters('diagnosticsStorageAccountName'))).primaryEndpoints.blob]"
}
}
}
}
]
}
You can install the OpenShift CLI (oc
) to interact with OpenShift Container Platform from a
command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc. |
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.9 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
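For example, you can display the client version to confirm the installation. This step is optional and not part of the documented procedure; the output format depends on your environment:
$ oc version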
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.9 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.9 MacOSX Client entry and save the file.
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OpenShift Container Platform installation.
You deployed an OpenShift Container Platform cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
You added machines to your cluster.
Confirm that the cluster recognizes the machines:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-0 Ready master 63m v1.22.1
master-1 Ready master 63m v1.22.1
master-2 Ready master 64m v1.22.1
The output lists all of the machines that you created.
The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. |
Review the pending CSRs and ensure that you see the client requests with the Pending
or Approved
status for each machine that you added to the cluster:
$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved automatically, and all of the pending CSRs for the machines that you added are in the Pending
status, approve the CSRs for your cluster machines:
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. |
For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. |
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Some Operators might not become available until some CSRs are approved. |
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
NAME AGE REQUESTOR CONDITION
csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending
...
If the remaining CSRs are not approved, and are in the Pending
status, approve the CSRs for your cluster machines:
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> (1)
1 | <csr_name> is the name of a CSR from the list of current CSRs. |
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready
status. Verify this by running the following command:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
master-0 Ready master 73m v1.22.1
master-1 Ready master 73m v1.22.1
master-2 Ready master 74m v1.22.1
worker-0 Ready worker 11m v1.22.1
worker-1 Ready worker 11m v1.22.1
It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. |
For more information on CSRs, see Certificate Signing Requests.
If you removed the DNS zone configuration when creating Kubernetes manifests and
generating Ignition configs, you must manually create DNS records that point at
the Ingress load balancer. You can create either a wildcard
*.apps.{baseDomain}.
or specific records. You can use A, CNAME, and other
record types, depending on your requirements.
You deployed an OpenShift Container Platform cluster on Microsoft Azure Stack Hub by using infrastructure that you provisioned.
Install the OpenShift CLI (oc
).
Install or update the Azure CLI.
Confirm the Ingress router has created a load balancer and populated the
EXTERNAL-IP
field:
$ oc -n openshift-ingress get service router-default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20
Export the Ingress router IP as a variable:
$ export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`
Add a *.apps
record to the DNS zone.
If you are adding this cluster to a new DNS zone, run:
$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER} --ttl 300
If you are adding this cluster to an existing DNS zone, run:
$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -a ${PUBLIC_IP_ROUTER} --ttl 300
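Optionally, you can confirm that the wildcard record was created. This check is not part of the documented procedure; adjust the zone and record names to match the command that you ran:
$ az network dns record-set a show -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -o table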
If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster’s current routes:
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
oauth-openshift.apps.cluster.basedomain.com
console-openshift-console.apps.cluster.basedomain.com
downloads-openshift-console.apps.cluster.basedomain.com
alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com
grafana-openshift-monitoring.apps.cluster.basedomain.com
prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com
After you start the OpenShift Container Platform installation on Microsoft Azure Stack Hub user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready.
Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure Stack Hub infrastructure.
Install the oc
CLI and log in.
Complete the cluster installation:
$ ./openshift-install --dir <installation_directory> wait-for install-complete (1)
INFO Waiting up to 30m0s for the cluster to initialize...
1 | For <installation_directory> , specify the path to the directory that you
stored the installation files in. |
See About remote health monitoring for more information about the Telemetry service.