In OKD version 4.9, you can install a customized cluster on OpenStack. To customize the installation, modify parameters in the install-config.yaml file before you install the cluster.
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You verified that OKD 4.9 is compatible with your OpenStack version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OKD on OpenStack support matrix.
You have a storage service installed in OpenStack, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OKD registry cluster deployment. For more information, see Optimizing storage.
You have the metadata service enabled in OpenStack.
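If you want to confirm which of these services are registered in your cloud, one quick check is to list the service catalog; the exact entries vary by deployment:
$ openstack catalog list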
To support an OKD installation, your OpenStack quota must meet the following requirements:
Resource | Value |
---|---|
Floating IP addresses | 3 |
Ports | 15 |
Routers | 1 |
Subnets | 1 |
RAM | 88 GB |
vCPUs | 22 |
Volume storage | 275 GB |
Instances | 7 |
Security groups | 3 |
Security group rules | 60 |
A cluster might function with fewer than recommended resources, but its performance is not guaranteed.
If OpenStack object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OKD image registry.
By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them.
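To review the quota values that are currently set for your project before you install, you can query them with the OpenStack CLI; for example:
$ openstack quota show <project>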
An OKD deployment comprises control plane machines, compute machines, and a bootstrap machine.
By default, the OKD installation process creates three control plane machines.
Each machine requires:
An instance from the OpenStack quota
A port from the OpenStack quota
A flavor with at least 16 GB memory and 4 vCPUs
At least 100 GB storage space from the OpenStack quota
By default, the OKD installation process creates three compute machines.
Each machine requires:
An instance from the OpenStack quota
A port from the OpenStack quota
A flavor with at least 8 GB memory and 2 vCPUs
At least 100 GB storage space from the OpenStack quota
Compute machines host the applications that you run on OKD; aim to run as many as you can.
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.
The bootstrap machine requires:
An instance from the OpenStack quota
A port from the OpenStack quota
A flavor with at least 16 GB memory and 4 vCPUs
At least 100 GB storage space from the OpenStack quota
If the OpenStack object storage service, commonly known as Swift, is available, OKD uses it as the image registry storage. If it is unavailable, the installation program relies on the OpenStack block storage service, commonly known as Cinder.
Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.
If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.
You have an OpenStack administrator account on the target environment.
The Swift service is installed.
On Ceph RGW, the account in url
option is enabled.
To enable Swift on OpenStack:
As an administrator in the OpenStack CLI, add the swiftoperator
role to the account that will access Swift:
$ openstack role add --user <user> --project <project> swiftoperator
Your OpenStack deployment can now use Swift for the image registry.
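To confirm that the role assignment took effect, you can list role assignments for the account; this is only a quick check and assumes the same <user> and <project> values:
$ openstack role assignment list --user <user> --project <project> --names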
After you install a cluster on OpenStack, you can use a Cinder volume that is in a specific availability zone for registry storage.
Create a YAML file that specifies the storage class and availability zone to use. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-csi-storageclass
provisioner: cinder.csi.openstack.org
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  availability: <availability_zone_name>
OKD does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. |
From a command line, apply the configuration:
$ oc apply -f <storage_class_file_name>
storageclass.storage.k8s.io/custom-csi-storageclass created
Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry
namespace. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-imageregistry
  namespace: openshift-image-registry (1)
  annotations:
    imageregistry.openshift.io: "true"
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi (2)
  storageClassName: <your_custom_storage_class> (3)
1 | Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. |
2 | Optional: Adjust the volume size. |
3 | Enter the name of the storage class that you created. |
From a command line, apply the configuration:
$ oc apply -f <pvc_file_name>
persistentvolumeclaim/csi-pvc-imageregistry created
Replace the original persistent volume claim in the image registry configuration with the new claim:
$ oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]'
config.imageregistry.operator.openshift.io/cluster patched
Over the next several minutes, the configuration is updated.
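While you wait, you can optionally watch the image registry Operator roll out the change; for example:
$ oc get clusteroperator image-registry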
To confirm that the registry is using the resources that you defined:
Verify that the PVC claim value is identical to the name that you provided in your PVC definition:
$ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml
...
status:
...
managementState: Managed
pvc:
claim: csi-pvc-imageregistry
...
Verify that the status of the PVC is Bound
:
$ oc get pvc -n openshift-image-registry csi-pvc-imageregistry
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m
The OKD installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in OpenStack.
Using the OpenStack CLI, verify the name and ID of the 'External' network:
$ openstack network list --long -c ID -c Name -c "Router Type"
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.
If the external network’s CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are:
Network | Range |
---|---|
machineNetwork | 10.0.0.0/16 |
serviceNetwork | 172.30.0.0/16 |
clusterNetwork | 10.128.0.0/14 |
If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in OpenStack. |
If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port. |
The OKD installation program relies on a file that is called clouds.yaml
. The file describes OpenStack configuration parameters, including the project name, log in information, and authorization service URLs.
Create the clouds.yaml
file:
If your OpenStack distribution includes the Horizon web UI, generate a clouds.yaml
file in it.
Remember to add a password to the auth field of the clouds.yaml file.
If your OpenStack distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml
, see Config files in the OpenStack documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
If your OpenStack installation uses self-signed certificate authority (CA) certificates for endpoint authentication:
Copy the certificate authority file to your machine.
Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run:
$ oc edit configmap -n openshift-config cloud-provider-config
Place the clouds.yaml
file in one of the following locations:
The value of the OS_CLIENT_CONFIG_FILE
environment variable
The current directory
A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml
A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml
The installation program searches for clouds.yaml
in that order.
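To check that the file is found and that the credentials work, you can request a token for a cloud entry by name; this sketch assumes the shiftstack entry from the example above:
$ openstack --os-cloud shiftstack token issue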
Optionally, you can edit the cloud provider configuration for your cluster. The cloud provider configuration controls how OKD interacts with OpenStack.
For a complete list of cloud provider configuration parameters, see the "OpenStack cloud configuration reference guide" page in the "Installing on OpenStack" documentation.
If you have not already generated manifest files for your cluster, generate them by running the following command:
$ openshift-install --dir <destination_directory> create manifests
In a text editor, open the cloud-provider configuration manifest file. For example:
$ vi openshift/manifests/cloud-provider-config.yaml
Modify the options based on the cloud configuration specification.
Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:
#...
[LoadBalancer]
use-octavia=true (1)
lb-provider = "amphora" (2)
floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a"(3)
#...
1 | This property enables Octavia integration. |
2 | This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . |
3 | This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. |
Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. |
For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. |
Save the changes to the file and proceed with installation.
You can update your cloud provider configuration after you run the installer. On a command line, run:
$ oc edit configmap -n openshift-config cloud-provider-config
After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status.
For more information about cloud provider configuration, see OpenStack cloud provider options.
Before you install OKD, download the installation file on a local computer.
You have a computer that runs Linux or macOS, with 500 MB of local disk space
Download the installation program from https://github.com/openshift/okd/releases.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. |
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider. |
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}
as the pull secret when prompted during the installation.
If you do not use the pull secret from the Red Hat OpenShift Cluster Manager:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Container Catalog registry, such as image streams and Operators, is not available.
You can customize the OKD cluster you install on OpenStack.
Obtain the OKD installation program and the pull secret for your cluster.
Obtain service principal permissions at the subscription level.
Create the install-config.yaml
file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 | For <installation_directory> , specify the directory name to store the
files that the installation program creates. |
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version. |
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. |
Select openstack as the platform to target.
Specify the OpenStack external network name to use for installing the cluster.
Specify the floating IP address to use for external access to the OpenShift API.
Specify an OpenStack flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.
Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.
Enter a name for your cluster. The name must be 14 or fewer characters long.
Paste the pull secret from the Red Hat OpenShift Cluster Manager. This field is optional.
Modify the install-config.yaml
file. You can find more information about
the available parameters in the "Installation configuration parameters" section.
Back up the install-config.yaml
file so that you can use
it to install multiple clusters.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
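A simple copy is enough for the backup; the file name here is only illustrative:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup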
See Installation configuration parameters section for more information about the available parameters.
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OKD
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
  httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
  noProxy: example.com (3)
additionalTrustBundle: | (4)
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...
1 | A proxy URL to use for creating HTTP connections outside the cluster. The
URL scheme must be http . |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or
other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in
the openshift-config namespace to hold the additional CA
certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network
Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter
with the FCOS trust bundle. The additionalTrustBundle field is required unless
the proxy’s identity certificate is signed by an authority from the FCOS trust
bundle. |
The installation program does not support the proxy readinessEndpoints field.
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy
settings in the provided install-config.yaml
file. If no proxy settings are
provided, a cluster
Proxy
object is still created, but it will have a nil
spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
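After installation, you can inspect the resulting object to confirm that your settings were applied; for example:
$ oc get proxy/cluster -o yaml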
Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml
installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml
file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and an error is reported.
Required installation configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
|
The API version for the |
String |
|
The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the |
A fully-qualified domain or subdomain name, such as |
|
Kubernetes resource |
Object |
|
The name of the cluster. DNS records for the cluster are all subdomains of |
String of lowercase letters, hyphens ( |
|
The configuration for the specific platform upon which to perform the installation: |
Object |
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
Parameter | Description | Values | ||
---|---|---|---|---|
|
The configuration for the cluster network. |
Object
|
||
|
The cluster network provider Container Network Interface (CNI) plugin to install. |
Either |
||
|
The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. |
An array of objects. For example:
|
||
|
Required if you use An IPv4 network. |
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.
||
|
The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. |
A subnet prefix. The default value is 23.
||
|
The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. |
An array with an IP address block in CIDR format. For example:
|
||
|
The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. |
An array of objects. For example:
|
||
|
Required if you use |
An IP network block in CIDR notation. For example, 10.0.0.0/16.
Optional installation configuration parameters are described in the following table:
Parameter | Description | Values | ||||
---|---|---|---|---|---|---|
|
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. |
String |
||||
|
The configuration for the machines that comprise the compute nodes. |
Array of |
||||
|
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). |
String |
||||
|
Whether to enable or disable simultaneous multithreading, or
|
|
||||
|
Required if you use |
|
||||
|
Required if you use |
|
||||
|
The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to 2. The default value is 3. |
||||
|
The configuration for the machines that comprise the control plane. |
Array of |
||||
|
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). |
String |
||||
|
Whether to enable or disable simultaneous multithreading, or
|
|
||||
|
Required if you use |
|
||||
|
Required if you use |
|
||||
|
The number of control plane machines to provision. |
The only supported value is 3. |
||||
|
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
|
|
||||
|
Sources and repositories for the release-image content. |
Array of objects. Includes a |
||||
|
Required if you use |
String |
||||
|
Specify one or more repositories that may also contain the same images. |
Array of strings |
||||
|
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. |
Setting this field to |
||||
|
The SSH key or keys to authenticate access to your cluster machines.
|
One or more keys. For example:
|
Additional OpenStack configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
|
For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. |
Integer, for example |
|
For compute machines, the root volume’s type. |
String, for example |
|
For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. |
Integer, for example |
|
For control plane machines, the root volume’s type. |
String, for example |
|
The name of the OpenStack cloud to use from the list of clouds in the clouds.yaml file.
String, for example |
|
The OpenStack external network name to be used for installation. |
String, for example |
|
The OpenStack flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the |
String, for example |
Optional OpenStack configuration parameters are described in the following table:
Parameter | Description | Values |
---|---|---|
|
Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. |
A list of one or more UUIDs as strings. For example, |
|
Additional security groups that are associated with compute machines. |
A list of one or more UUIDs as strings. For example, |
|
OpenStack Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the OpenStack administrator configured. On clusters that use Kuryr, OpenStack Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OKD services that rely on Amphora VMs, are not created according to the value of this property. |
A list of strings. For example, |
|
For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installer selects the default availability zone. |
A list of strings, for example |
|
Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. |
A list of one or more UUIDs as strings. For example, |
|
Additional security groups that are associated with control plane machines. |
A list of one or more UUIDs as strings. For example, |
|
OpenStack Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the OpenStack administrator configured. On clusters that use Kuryr, OpenStack Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OKD services that rely on Amphora VMs, are not created according to the value of this property. |
A list of strings. For example, |
|
For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installer selects the default availability zone. |
A list of strings, for example |
|
The location from which the installer downloads the FCOS image. You must set this parameter to perform an installation in a restricted network. |
An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, |
|
Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if You can use this property to exceed the default persistent volume (PV) limit for OpenStack of 26 PVs per node. To exceed the limit, set the You can also use this property to enable the QEMU guest agent by including the |
A list of key-value string pairs. For example, |
|
The default machine pool platform configuration. |
|
|
An existing floating IP address to associate with the ingress port. To use this property, you must also define the |
An IP address, for example |
|
An existing floating IP address to associate with the API load balancer. To use this property, you must also define the |
An IP address, for example |
|
IP addresses for external DNS servers that cluster instances use for DNS resolution. |
A list of IP addresses as strings. For example, |
|
The UUID of an OpenStack subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OKD installer. Instead, add DNS to the subnet in OpenStack. |
A UUID as a string. For example, |
Optionally, you can deploy a cluster on an OpenStack subnet of your choice. The subnet’s UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.
This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different OpenStack subnet by setting the value of the platform.openstack.machinesSubnet
property to the subnet’s UUID.
Before you run the OKD installer with a custom subnet, verify that your configuration meets the following requirements:
The subnet that is used by platform.openstack.machinesSubnet
has DHCP enabled.
The CIDR of platform.openstack.machinesSubnet
matches the CIDR of networking.machineNetwork
.
The installation program user has permission to create ports on this network, including ports with fixed IP addresses.
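To confirm the DHCP and CIDR requirements above, you can inspect the subnet with the OpenStack CLI; a minimal sketch:
$ openstack subnet show <subnet_UUID> -c cidr -c enable_dhcp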
Clusters that use custom subnets have the following limitations:
If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet
subnet must be attached to a router that is connected to the externalNetwork
network.
If the platform.openstack.machinesSubnet
value is set in the install-config.yaml
file, the installation program does not create a private network or subnet for your OpenStack machines.
You cannot use the platform.openstack.externalDNS
property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the OpenStack network.
By default, the API VIP takes x.x.x.5 and the ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. |
If you want your cluster to use bare metal machines, modify the
install-config.yaml
file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines.
Bare-metal compute machines are not supported on clusters that use Kuryr.
Be sure that your install-config.yaml file reflects whether the FCOS image that you use for bare metal workers supports your chosen network type. |
The OpenStack Bare Metal service (Ironic) is enabled and accessible via the OpenStack Compute API.
Bare metal is available as an OpenStack flavor.
The OpenStack network supports both VM and bare metal server attachment.
Your network configuration does not rely on a provider network. Provider networks are not supported.
If you want to deploy the machines on a pre-existing network, an OpenStack subnet is provisioned.
If you want to deploy the machines on an installer-provisioned network, the OpenStack Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks.
You created an install-config.yaml
file as part of the OKD installation process.
In the install-config.yaml
file, edit the flavors for machines:
If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type
to a bare metal flavor.
Change the value of compute.platform.openstack.type
to a bare metal flavor.
If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet
to the OpenStack subnet UUID of the network. Control plane and compute machines must use the same subnet.
Example of bare metal machines in an install-config.yaml file:
controlPlane:
  platform:
    openstack:
      type: <bare_metal_control_plane_flavor> (1)
...
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    openstack:
      type: <bare_metal_compute_flavor> (2)
  replicas: 3
...
platform:
  openstack:
    machinesSubnet: <subnet_UUID> (3)
...
1 | If you want to have bare-metal control plane machines, change this value to a bare metal flavor. |
2 | Change this value to a bare metal flavor to use for compute machines. |
3 | If you want to use a pre-existing network, change this value to the UUID of the OpenStack subnet. |
Use the updated install-config.yaml
file to complete the installation process.
The compute machines that are created during deployment use the flavor that you
added to the file.
The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installation program.
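For example, a sketch that assumes the same installation directory as the initial run:
$ ./openshift-install wait-for install-complete --dir <installation_directory> --log-level debug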
You can deploy your OKD clusters on OpenStack with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.
OpenStack provider networks map directly to an existing physical network in the data center. An OpenStack administrator must create them.
For example, OKD workloads can be connected to a data center by using a provider network.
OKD clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.
Example provider network types include flat (untagged) and VLAN (802.1Q tagged).
A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. |
You can learn more about provider and tenant networks in the OpenStack documentation.
Before you install an OKD cluster, your OpenStack deployment and provider network must meet a number of conditions:
The OpenStack networking service (Neutron) is enabled and accessible through the OpenStack networking API.
The OpenStack networking service has the port security and allowed address pairs extensions enabled.
The provider network can be shared with other tenants.
Use the openstack network create command with the --share flag to create a network that can be shared.
The OpenStack project that you use to install the cluster must own the provider network, as well as an appropriate subnet.
To learn more about creating networks on OpenStack, read the provider networks documentation. |
If the cluster is owned by the admin
user, you must run the installer as that user to create ports on the network.
Provider networks must be owned by the OpenStack project that is used to create the cluster. If they are not, the OpenStack Compute service (Nova) cannot request a port from that network. |
Verify that the provider network can reach the OpenStack metadata service IP address, which is 169.254.169.254
by default.
Depending on your OpenStack SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:
$ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.
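A sketch of such a rule that shares the provider network with a single project; the project ID and network name are placeholders:
$ openstack network rbac create --target-project <project_id> --action access_as_shared --type network <provider_network_name>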
You can deploy an OKD cluster that has its primary network interface on an OpenStack provider network.
Your OpenStack deployment is configured as described by "OpenStack provider network requirements for cluster installation".
In a text editor, open the install-config.yaml
file.
Set the value of the platform.openstack.apiVIP
property to the IP address for the API VIP.
Set the value of the platform.openstack.ingressVIP
property to the IP address for the ingress VIP.
Set the value of the platform.openstack.machinesSubnet
property to the UUID of the provider network subnet.
Set the value of the networking.machineNetwork.cidr
property to the CIDR block of the provider network subnet.
The platform.openstack.apiVIP and platform.openstack.ingressVIP values must be unassigned IP addresses from the provider network subnet’s CIDR block. |
...
platform:
  openstack:
    apiVIP: 192.0.2.13
    ingressVIP: 192.0.2.23
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    # ...
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24
You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. |
When you deploy the cluster, the installer uses the install-config.yaml
file to deploy the cluster on the provider network.
You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks. |
Sample customized install-config.yaml file for OpenStack
This sample install-config.yaml demonstrates all of the possible OpenStack customization options.
This sample file is provided for reference only. You must obtain your
install-config.yaml file by using the installation program.
|
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OVNKubernetes
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
Optionally, you can set the affinity policy for compute machines during installation. The installer does not select an affinity policy for compute machines by default.
You can also create machine sets that use particular OpenStack server groups after installation.
Control plane machines are created with a soft-anti-affinity policy by default. |
You can learn more about OpenStack instance scheduling and placement in the OpenStack documentation. |
Create the install-config.yaml
file and complete any modifications to it.
Using the OpenStack command-line interface, create a server group for your compute machines. For example:
$ openstack \
--os-compute-api-version=2.15 \
server group create \
--policy anti-affinity \
my-openshift-worker-group
For more information, see the server group create
command documentation.
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where:
installation_directory
Specifies the name of the directory that contains the install-config.yaml
file for your cluster.
Open manifests/99_openshift-cluster-api_worker-machineset-0.yaml
, the MachineSet
definition file.
Add the property serverGroupID
to the definition beneath the spec.template.spec.providerSpec.value
property. For example:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_ID>
    machine.openshift.io/cluster-api-machine-role: <node_role>
    machine.openshift.io/cluster-api-machine-type: <node_role>
  name: <infrastructure_ID>-<node_role>
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_ID>
      machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_ID>
        machine.openshift.io/cluster-api-machine-role: <node_role>
        machine.openshift.io/cluster-api-machine-type: <node_role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role>
    spec:
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee (1)
          kind: OpenstackProviderSpec
          networks:
          - filter: {}
            subnets:
            - filter:
                name: <subnet_name>
                tags: openshiftClusterID=<infrastructure_ID>
          securityGroups:
          - filter: {}
            name: <infrastructure_ID>-<node_role>
          serverMetadata:
            Name: <infrastructure_ID>-<node_role>
            openshiftClusterID: <infrastructure_ID>
          tags:
          - openshiftClusterID=<infrastructure_ID>
          trunk: true
          userDataSecret:
            name: <node_role>-user-data
          availabilityZone: <optional_openstack_availability_zone>
1 | Add the UUID of your server group here. |
Optional: Back up the manifests/99_openshift-cluster-api_worker-machineset-0.yaml
file. The installation program deletes the manifests/
directory when creating the cluster.
When you install the cluster, the installer uses the MachineSet
definition that you modified to create compute machines within your OpenStack server group.
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required. |
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the ~/.ssh/authorized_keys file for the core user. |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OKD cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. |
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OKD, provide the SSH public key to the installation program.
At deployment, all OKD machines are created in an OpenStack tenant network. Therefore, they are not accessible directly in most OpenStack deployments.
You can configure OKD API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
Create floating IP (FIP) addresses for external access to the OKD API and cluster applications.
Using the OpenStack CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
Using the OpenStack CLI, create the apps, or ingress, FIP:
$ openstack floating ip create --description "ingress <cluster_name>.<base_domain>" <external_network>
Add records that follow these patterns to your DNS server for the API and ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
If you do not control the DNS server, you can access the cluster by adding the cluster domain names, such as api.<cluster_name>.<base_domain> and *.apps.<cluster_name>.<base_domain>, to your /etc/hosts file. The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. |
Add the FIPs to the
install-config.yaml
file as the values of the following
parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you use these values, you must also enter an external network as the value of the
platform.openstack.externalNetwork
parameter in the install-config.yaml
file.
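For example, the relevant part of the install-config.yaml file might look like the following sketch; the FIP and network values are placeholders for the resources that you created:
platform:
  openstack:
    apiFloatingIP: <API_FIP>
    ingressFloatingIP: <apps_FIP>
    externalNetwork: <external_network>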
You can make OKD resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. |
You can install OKD on OpenStack without providing floating IP addresses.
In the
install-config.yaml
file, do not define the following
parameters:
platform.openstack.ingressFloatingIP
platform.openstack.apiFloatingIP
If you cannot provide an external network, you can also leave platform.openstack.externalNetwork
blank. If you do not provide a value for platform.openstack.externalNetwork
, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.
You can enable name resolution by creating DNS records for the API and ingress ports. For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_port_IP>
If you do not control the DNS server, you can add the records to your /etc/hosts file. This makes the API and applications accessible only to you, which is not suitable for production deployment but does allow installation for development and testing. |
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
Obtain the OKD installation program and the pull secret for your cluster.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. |
When the cluster deployment completes, directions for accessing your cluster,
including a link to its web console and credentials for the kubeadmin
user,
display in your terminal.
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. |
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
You can verify your OKD cluster’s status during or after installation.
In the cluster environment, export the administrator’s kubeconfig file:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored the installation files in. |
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
View the control plane and compute machines created after a deployment:
$ oc get nodes
View your cluster’s version:
$ oc get clusterversion
View your Operators' status:
$ oc get clusteroperator
View all running pods in the cluster:
$ oc get pods -A
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OKD installation.
You deployed an OKD cluster.
You installed the oc
CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
See Accessing the web console for more details about accessing and understanding the OKD web console.
See About remote health monitoring for more information about the Telemetry service.
If necessary, you can opt out of remote health reporting.
If you need to enable external access to node ports, configure ingress cluster traffic by using a node port.
If you did not configure OpenStack to accept application traffic over floating IP addresses, configure OpenStack access with floating IP addresses.