Installing a cluster on OpenStack with Kuryr - Installing on OpenStack | Installing | OKD 4.13

Kuryr is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes.

In OKD version 4.13, you can install a customized cluster on OpenStack that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml before you install the cluster.

Prerequisites

About Kuryr SDN

Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia OpenStack services to provide networking for pods and Services.

Kuryr and OKD integration is primarily designed for OKD clusters running on OpenStack VMs. Kuryr improves the network performance by plugging OKD pods into OpenStack SDN. In addition, it provides interconnectivity between pods and OpenStack virtual instances.

Kuryr components are installed as pods in OKD using the openshift-kuryr namespace:

  • kuryr-controller - a single service instance installed on a master node. This is modeled in OKD as a Deployment object.

  • kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OKD node. This is modeled in OKD as a DaemonSet object.

The Kuryr controller watches the OKD API server for pod, service, and namespace create, update, and delete events. It maps the OKD API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OKD via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs.
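
For example, after a Kuryr-based installation completes, you can verify that these components are running by listing the resources in the openshift-kuryr namespace. This is an illustrative check, not part of the installation procedure:

$ oc -n openshift-kuryr get pods,deployments,daemonsets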

Kuryr is recommended for OKD deployments on encapsulated OpenStack tenant networks to avoid double encapsulation, such as running an encapsulated OKD SDN over an OpenStack network.

If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial.

Kuryr is not recommended in deployments where all of the following criteria are true:

  • The OpenStack version is less than 16.

  • The deployment uses UDP services, or a large number of TCP services on few hypervisors.

Kuryr is also not recommended in deployments where all of the following criteria are true:

  • The ovn-octavia Octavia driver is disabled.

  • The deployment uses a large number of TCP services on few hypervisors.

Resource guidelines for installing OKD on OpenStack with Kuryr

When you use Kuryr SDN, pods, services, namespaces, and network policies consume resources from the OpenStack quota, which increases the minimum requirements. Kuryr also has some additional requirements on top of what a default installation requires.

Use the following quota to satisfy a default cluster’s minimum requirements:

Table 1. Recommended resources for a default OKD cluster on OpenStack with Kuryr
+-------------------------+---------------------------------------------------------------+
| Resource                | Value                                                         |
+-------------------------+---------------------------------------------------------------+
| Floating IP addresses   | 3 - plus the expected number of Services of LoadBalancer type |
| Ports                   | 1500 - 1 needed per Pod                                       |
| Routers                 | 1                                                             |
| Subnets                 | 250 - 1 needed per Namespace/Project                          |
| Networks                | 250 - 1 needed per Namespace/Project                          |
| RAM                     | 112 GB                                                        |
| vCPUs                   | 28                                                            |
| Volume storage          | 275 GB                                                        |
| Instances               | 7                                                             |
| Security groups         | 250 - 1 needed per Service and per NetworkPolicy              |
| Security group rules    | 1000                                                          |
| Server groups           | 2 - plus 1 for each additional availability zone in each      |
|                         | machine pool                                                  |
| Load balancers          | 100 - 1 needed per Service                                    |
| Load balancer listeners | 500 - 1 needed per Service-exposed port                       |
| Load balancer pools     | 500 - 1 needed per Service-exposed port                       |
+-------------------------+---------------------------------------------------------------+

A cluster might function with fewer than recommended resources, but its performance is not guaranteed.

If OpenStack object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OKD image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry.

If you are using OpenStack version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects.

Take the following notes into consideration when setting resources:

  • The number of ports that are required is larger than the number of pods. Kuryr uses port pools to keep pre-created ports ready for use by pods, which speeds up pod boot time.

  • Each network policy is mapped into an OpenStack security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group.

  • Each service is mapped to an OpenStack load balancer. Consider this requirement when estimating the number of security groups required for the quota.

    If you are using OpenStack version 15 or earlier, or the ovn-octavia driver, each load balancer has a security group with the user project.

  • The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the OpenStack deployment’s size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them.

    If you are using OpenStack version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows.

An OKD deployment comprises control plane machines, compute machines, and a bootstrap machine.

To enable Kuryr SDN, your environment must meet the following requirements:

  • Run OpenStack 13+.

  • Have Overcloud with Octavia.

  • Use Neutron Trunk ports extension.

  • Use the openvswitch firewall driver instead of ovs-hybrid if the ML2/OVS Neutron driver is used.

Increasing quota

When using Kuryr SDN, you must increase quotas to satisfy the OpenStack resources used by pods, services, namespaces, and network policies.

Procedure
  • Increase the quotas for a project by running the following command:

    $ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>
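
    You can verify the updated values afterward with, for example:

    $ openstack quota show <project>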

Configuring Neutron

Kuryr CNI leverages the Neutron Trunks extension to plug containers into the OpenStack SDN, so you must use the trunks extension for Kuryr to properly work.

In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies.
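
A minimal sketch of the relevant setting, assuming the default ML2/OVS agent configuration file path (adjust the path for your deployment):

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[securitygroup]
firewall_driver = openvswitch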

Configuring Octavia

Kuryr SDN uses OpenStack’s Octavia LBaaS to implement OKD services. Thus, you must install and configure Octavia components in OpenStack to use Kuryr SDN.

To enable Octavia, you must include the Octavia service during the installation of the OpenStack Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update.

The following steps only capture the key pieces required during the deployment of OpenStack when dealing with Octavia. It is also important to note that registry methods vary.

This example uses the local registry method.

Procedure
  1. If you are using the local registry, create a template to upload the images to the registry. For example:

    (undercloud) $ openstack overcloud container image prepare \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
    --namespace=registry.access.redhat.com/rhosp13 \
    --push-destination=<local-ip-from-undercloud.conf>:8787 \
    --prefix=openstack- \
    --tag-from-label {version}-{product-version} \
    --output-env-file=/home/stack/templates/overcloud_images.yaml \
    --output-images-file /home/stack/local_registry_images.yaml
  2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:

    ...
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
      push_destination: <local-ip-from-undercloud.conf>:8787
    - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
      push_destination: <local-ip-from-undercloud.conf>:8787

    The Octavia container versions vary depending upon the specific OpenStack release installed.

  3. Pull the container images from registry.redhat.io to the Undercloud node:

    (undercloud) $ sudo openstack overcloud container image upload \
      --config-file  /home/stack/local_registry_images.yaml \
      --verbose

    This may take some time depending on the speed of your network and Undercloud disk.
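
    The octavia_timeouts.yaml environment file that is passed to the deploy command in the next step typically raises the Octavia timeout values. A minimal sketch, assuming the TripleO OctaviaTimeout* parameters are available in your release (values are illustrative):

    parameter_defaults:
      OctaviaTimeoutClientData: 1200000
      OctaviaTimeoutMemberData: 1200000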

  4. Install or update your Overcloud environment with Octavia:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
      -e octavia_timeouts.yaml

    This command only includes the files associated with Octavia; it varies based on your specific installation of OpenStack. See the OpenStack documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director.

    When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN.

The Octavia OVN Driver

Octavia supports multiple provider drivers through the Octavia API.

To see all available Octavia provider drivers, on a command line, enter:

$ openstack loadbalancer provider list
Example output
+---------+-------------------------------------------------+
| name    | description                                     |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver.                     |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn     | Octavia OVN driver.                             |
+---------+-------------------------------------------------+

Beginning with OpenStack version 16, the Octavia OVN provider driver (ovn) is supported on OKD on OpenStack deployments.

ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2.

The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it.

If Kuryr uses ovn instead of Amphora, it offers the following benefits:

  • Decreased resource requirements. Kuryr does not require a load balancer VM for each service.

  • Reduced network latency.

  • Increased service creation speed by using OpenFlow rules instead of a VM for each service.

  • Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.

You can configure your cluster to use the Octavia OVN driver after your OpenStack cloud is upgraded from version 13 to version 16.

Known limitations of installing with Kuryr

Using OKD with Kuryr SDN has several known limitations.

OpenStack general limitations

Using OKD with Kuryr SDN has several limitations that apply to all versions and environments:

  • Service objects with the NodePort type are not supported.

  • Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods.

  • If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.

  • Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting.

OpenStack version limitations

Using OKD with Kuryr SDN has several limitations that depend on the OpenStack version.

  • OpenStack versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OKD service. Creating too many services can cause you to run out of resources.

    Deployments of later versions of OpenStack that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of OpenStack.

  • Kuryr SDN does not support automatic unidling by a service.

OpenStack upgrade limitations

As a result of the OpenStack upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required.

You can address API changes on an individual basis.

If the Amphora image is upgraded, the OpenStack operator can handle existing load balancer VMs in two ways:

  • Upgrade each VM by triggering a load balancer failover.

  • Leave responsibility for upgrading the VMs to users.

If the operator takes the first option, there might be short downtimes during failovers.
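
If the operator chooses the failover approach, individual load balancers can be failed over with the Octavia CLI. For example (the load balancer ID is a placeholder):

$ openstack loadbalancer failover <load_balancer_id>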

If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features.

Control plane machines

By default, the OKD installation process creates three control plane machines.

Each machine requires:

  • An instance from the OpenStack quota

  • A port from the OpenStack quota

  • A flavor with at least 16 GB memory and 4 vCPUs

  • At least 100 GB storage space from the OpenStack quota

Compute machines

By default, the OKD installation process creates three compute machines.

Each machine requires:

  • An instance from the OpenStack quota

  • A port from the OpenStack quota

  • A flavor with at least 8 GB memory and 2 vCPUs

  • At least 100 GB storage space from the OpenStack quota

Compute machines host the applications that you run on OKD; aim to run as many as you can.

Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

  • An instance from the OpenStack quota

  • A port from the OpenStack quota

  • A flavor with at least 16 GB memory and 4 vCPUs

  • At least 100 GB storage space from the OpenStack quota
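
You can check whether an existing flavor meets these memory, vCPU, and storage requirements with the OpenStack CLI. For example, assuming a flavor named m1.xlarge:

$ openstack flavor show m1.xlarge -c ram -c vcpus -c disk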

Load balancing requirements for user-provisioned infrastructure

Deployment with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Before you install OKD, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

If you want to deploy the API and application Ingress load balancers with a Fedora instance, you must purchase the Fedora subscription separately.

The load balancing infrastructure must meet the following requirements:

  1. API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.

    • A stateless load balancing algorithm. The options vary based on the load balancer implementation.

    Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OKD cluster and the Kubernetes API that runs inside the cluster.

    Configure the following ports on both the front and back of the load balancers:

    Table 2. API load balancer

    Port 6443 (Kubernetes API server)
      Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe.
      Internal: Yes. External: Yes.

    Port 22623 (Machine config server)
      Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.
      Internal: Yes. External: No.

    The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, is a well-tested configuration.

  2. Application Ingress load balancer: Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OKD cluster.

    Configure the following conditions:

    • Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode.

    • A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.

    If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

    Configure the following ports on both the front and back of the load balancers:

    Table 3. Application Ingress load balancer

    Port 443 (HTTPS traffic)
      Back-end machines (pool members): The machines that run the Ingress Controller pods, compute, or worker, by default.
      Internal: Yes. External: Yes.

    Port 80 (HTTP traffic)
      Back-end machines (pool members): The machines that run the Ingress Controller pods, compute, or worker, by default.
      Internal: Yes. External: Yes.

    If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.

Example load balancer configuration for clusters that are deployed with user-managed load balancers

This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an haproxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another.

In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.

If you are using haproxy as a load balancer and SELinux is set to enforcing, you must ensure that the haproxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1.

Sample API and application Ingress load balancer configuration
global
  log         127.0.0.1 local2
  pidfile     /var/run/haproxy.pid
  maxconn     4000
  daemon
defaults
  mode                    http
  log                     global
  option                  dontlognull
  option http-server-close
  option                  redispatch
  retries                 3
  timeout http-request    10s
  timeout queue           1m
  timeout connect         10s
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 10s
  timeout check           10s
  maxconn                 3000
listen api-server-6443 (1)
  bind *:6443
  mode tcp
  option  httpchk GET /readyz HTTP/1.0
  option  log-health-checks
  balance roundrobin
  server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup (2)
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 (3)
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup (2)
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 (4)
  bind *:443
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:443 check inter 1s
  server worker1 worker1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 (5)
  bind *:80
  mode tcp
  balance source
  server worker0 worker0.ocp4.example.com:80 check inter 1s
  server worker1 worker1.ocp4.example.com:80 check inter 1s
1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines.
2 The bootstrap entries must be in place before the OKD cluster installation and they must be removed after the bootstrap process is complete.
3 Port 22623 handles the machine config server traffic and points to the control plane machines.
4 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.
5 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default.

If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes.

If you are using haproxy as a load balancer, you can check that the haproxy process is listening on ports 6443, 22623, 443, and 80 by running netstat -nltupe on the haproxy node.
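
For example, a quick check for the expected listeners might look like this (run on the haproxy node):

$ sudo netstat -nltupe | grep -E ':(6443|22623|443|80)\b'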

Enabling Swift on OpenStack

Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program.

If the OpenStack object storage service, commonly known as Swift, is available, OKD uses it as the image registry storage. If it is unavailable, the installation program relies on the OpenStack block storage service, commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section.

OpenStack 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OKD registry. You must set the value of rgw_max_attr_size to at least 1024 characters.

Before installation, check if your OpenStack deployment is affected by this problem. If it is, reconfigure Ceph RGW.

Prerequisites
  • You have an OpenStack administrator account on the target environment.

  • The Swift service is installed.

  • On Ceph RGW, the account in url option is enabled.

Procedure

To enable Swift on OpenStack:

  1. As an administrator in the OpenStack CLI, add the swiftoperator role to the account that will access Swift:

    $ openstack role add --user <user> --project <project> swiftoperator

Your OpenStack deployment can now use Swift for the image registry.
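
To confirm the role assignment, you can list the role assignments for the account. For example:

$ openstack role assignment list --user <user> --project <project> --names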

Verifying external network access

The OKD installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in OpenStack.

Procedure
  1. Using the OpenStack CLI, verify the name and ID of the 'External' network:

    $ openstack network list --long -c ID -c Name -c "Router Type"
    Example output
    +--------------------------------------+----------------+-------------+
    | ID                                   | Name           | Router Type |
    +--------------------------------------+----------------+-------------+
    | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External    |
    +--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network.

If the external network’s CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process.

The default network ranges are:

+----------------+---------------+
| Network        | Range         |
+----------------+---------------+
| machineNetwork | 10.0.0.0/16   |
| serviceNetwork | 172.30.0.0/16 |
| clusterNetwork | 10.128.0.0/14 |
+----------------+---------------+

If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in OpenStack.

If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port.

Defining parameters for the installation program

The OKD installation program relies on a file that is called clouds.yaml. The file describes OpenStack configuration parameters, including the project name, login information, and authorization service URLs.

Procedure
  1. Create the clouds.yaml file:

    • If your OpenStack distribution includes the Horizon web UI, generate a clouds.yaml file in it.

      Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml.

    • If your OpenStack distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml, see Config files in the OpenStack documentation.

      clouds:
        shiftstack:
          auth:
            auth_url: http://10.10.14.42:5000/v3
            project_name: shiftstack
            username: <username>
            password: <password>
            user_domain_name: Default
            project_domain_name: Default
        dev-env:
          region_name: RegionOne
          auth:
            username: <username>
            password: <password>
            project_name: 'devonly'
            auth_url: 'https://10.10.14.22:5001/v2.0'
  2. If your OpenStack installation uses self-signed certificate authority (CA) certificates for endpoint authentication:

    1. Copy the certificate authority file to your machine.

    2. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:

      clouds:
        shiftstack:
          ...
          cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"

      After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run:

      $ oc edit configmap -n openshift-config cloud-provider-config
  3. Place the clouds.yaml file in one of the following locations:

    1. The value of the OS_CLIENT_CONFIG_FILE environment variable

    2. The current directory

    3. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml

    4. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml

      The installation program searches for clouds.yaml in that order.
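
      For example, to point the installation program at a clouds.yaml file in a non-default location, you can set the environment variable (the path is illustrative):

      $ export OS_CLIENT_CONFIG_FILE=/home/user/config/clouds.yaml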

Setting OpenStack Cloud Controller Manager options

Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OKD interacts with OpenStack.

For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation.

Procedure
  1. If you have not already generated manifest files for your cluster, generate them by running the following command:

    $ openshift-install --dir <destination_directory> create manifests
  2. In a text editor, open the cloud-provider configuration manifest file. For example:

    $ vi openshift/manifests/cloud-provider-config.yaml
  3. Modify the options according to the CCM reference guide.

    Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example:

    #...
    [LoadBalancer]
    use-octavia=true (1)
    lb-provider = "amphora" (2)
    floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" (3)
    create-monitor = True (4)
    monitor-delay = 10s (5)
    monitor-timeout = 10s (6)
    monitor-max-retries = 1 (7)
    #...
    1 This property enables Octavia integration.
    2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT.
    3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here.
    4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of OpenStack 16.2, this feature is only available for the Amphora provider.
    5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True.
    7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True.

    Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section.

    You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local. The OVN Octavia provider in OpenStack 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn".

    For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider.

  4. Save the changes to the file and proceed with installation.

    You can update your cloud provider configuration after you run the installer. On a command line, run:

    $ oc edit configmap -n openshift-config cloud-provider-config

    After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status.
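
    You can watch the node status while the cluster reconfigures itself. For example:

    $ oc get nodes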

Obtaining the installation program

Before you install OKD, download the installation file on the host you are using for installation.

Prerequisites
  • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure
  1. Download the installation program from https://github.com/openshift/okd/releases.

    The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

    Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.

  2. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar -xvf openshift-install-linux.tar.gz
  3. Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.

    Using a pull secret from the Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation.

    • Red Hat Operators are not available.

    • The Telemetry and Insights operators do not send data to Red Hat.

    • Content from the Red Hat Container Catalog registry, such as image streams and Operators, are not available.

Creating the installation configuration file

You can customize the OKD cluster you install on OpenStack.

Prerequisites
  • Obtain the OKD installation program and the pull secret for your cluster.

  • Obtain service principal permissions at the subscription level.

Procedure
  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> (1)
      1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

      When specifying the directory:

      • Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.

      • Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.

        Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command:

        $ rm -rf ~/.powervs
    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select openstack as the platform to target.

      3. Specify the OpenStack external network name to use for installing the cluster.

      4. Specify the floating IP address to use for external access to the OpenShift API.

      5. Specify an OpenStack flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes.

      6. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name.

      7. Enter a name for your cluster. The name must be 14 or fewer characters long.

      8. Paste the pull secret from the Red Hat OpenShift Cluster Manager. This field is optional.

  2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

  3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
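
For a Kuryr-based installation, the networking.networkType field in install-config.yaml is set to Kuryr. A minimal illustrative fragment (domain, cluster name, cloud, and network names are placeholders):

apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
networking:
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external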

Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Kuryr installations default to HTTP proxies.

Prerequisites
  • For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter:

    $ ip route add <cluster_network_cidr> via <installer_subnet_gateway>
  • The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates.

  • You have an existing install-config.yaml file.

  • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure
  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
      noProxy: example.com (3)
    additionalTrustBundle: | (4)
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster.
    3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
    4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle.
    5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly.

    The installation program does not support the proxy readinessEndpoints field.

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OKD.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

Only the Proxy object named cluster is supported, and no additional proxies can be created.

Installation configuration parameters

Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

After installation, you cannot modify these parameters in the install-config.yaml file.

Required configuration parameters

Required installation configuration parameters are described in the following table:

Table 4. Required parameters
Parameter Description Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions.

String

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform

The configuration for the specific platform upon which to perform the installation: alibabacloud, aws, baremetal, azure, gcp, ibmcloud, nutanix, openstack, ovirt, powervs, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

Object

Network configuration parameters

You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

Only IPv4 addresses are supported.

Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster.

Table 5. Network parameters
Parameter Description Values

networking

The configuration for the cluster network.

Object

You cannot modify parameters specified by the networking object after installation.

networking.networkType

The Red Hat OpenShift Networking network plugin to install.

Either OpenShiftSDN or OVNKubernetes. The default value is OVNKubernetes.

networking.clusterNetwork

The IP address blocks for pods.

The default value is 10.128.0.0/14 with a host prefix of /23.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. An IP address block.

An IPv4 network.

An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

networking.clusterNetwork.hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix.

The default value is 23.

networking.serviceNetwork

The IP address block for services. The default value is 172.30.0.0/16.

The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network.

An array with an IP address block in CIDR format. For example:

networking:
  serviceNetwork:
   - 172.30.0.0/16

networking.machineNetwork

The IP address blocks for machines.

If you specify multiple IP address blocks, the blocks must not overlap.

An array of objects. For example:

networking:
  machineNetwork:
  - cidr: 10.0.0.0/16

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24. For IBM Power Virtual Server, the default value is 192.168.0.0/24.

An IP network block in CIDR notation.

For example, 10.0.0.0/16.

Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

Optional configuration parameters

Optional installation configuration parameters are described in the following table:

Table 6. Optional parameters
Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

capabilities

Controls the installation of optional core cluster components. You can reduce the footprint of your OKD cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing.

String array

capabilities.baselineCapabilitySet

Selects an initial set of optional capabilities to enable. Valid values are None, v4.11, v4.12 and vCurrent. The default value is vCurrent.

String

capabilities.additionalEnabledCapabilities

Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet. You may specify multiple capabilities in this parameter.

String array

cpuPartitioningMode

Enables workload partitioning, which isolates OKD services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section.

None or AllNodes. None is the default value.

compute

The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default).

String

compute: hyperthreading:

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, powervs, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

featureSet

Enables the cluster for a feature set. A feature set is a collection of OKD features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates".

String. The name of the feature set to enable, such as TechPreviewNoUpgrade.

controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64.

String

controlPlane: hyperthreading:

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

alibabacloud, aws, azure, gcp, ibmcloud, nutanix, openstack, ovirt, powervs, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.

credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.

If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint, Passthrough or Manual.

Mint, Passthrough, Manual or an empty string ("").

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. The default value is External.

Setting this field to Internal is not supported on non-cloud platforms.

If the value of the field is set to Internal, the cluster will become non-functional. For more information, refer to BZ#1953035.

sshKey

The SSH key to authenticate access to your cluster machines.

For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

For example, sshKey: ssh-ed25519 AAAA...

  1. Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.

Additional OpenStack configuration parameters

Additional OpenStack configuration parameters are described in the following table:

Table 7. Additional OpenStack parameters
Parameter Description Values

compute.platform.openstack.rootVolume.size

For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

compute.platform.openstack.rootVolume.type

For compute machines, the root volume’s type.

String, for example performance.

controlPlane.platform.openstack.rootVolume.size

For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.

Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type

For control plane machines, the root volume’s type.

String, for example performance.

platform.openstack.cloud

The name of the OpenStack cloud to use from the list of clouds in the clouds.yaml file.

String, for example MyCloud.

platform.openstack.externalNetwork

The OpenStack external network name to be used for installation.

String, for example external.

platform.openstack.computeFlavor

The OpenStack flavor to use for control plane and compute machines.

This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually.

String, for example m1.xlarge.

Optional OpenStack configuration parameters

Optional OpenStack configuration parameters are described in the following table:

Table 8. Optional OpenStack parameters
Parameter Description Values

compute.platform.openstack.additionalNetworkIDs

Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with compute machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones

OpenStack Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the OpenStack administrator configured.

On clusters that use Kuryr, OpenStack Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OKD services that rely on Amphora VMs, are not created according to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

compute.platform.openstack.rootVolume.zones

For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

compute.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations and therefore affects OpenStack upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional OpenStack host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

controlPlane.platform.openstack.additionalNetworkIDs

Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.

Additional networks that are attached to a control plane machine are also attached to the bootstrap node.

A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs

Additional security groups that are associated with control plane machines.

A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones

OpenStack Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the OpenStack administrator configured.

On clusters that use Kuryr, OpenStack Octavia does not support availability zones. Load balancers, and OKD services that rely on Amphora VMs if you use the Amphora provider driver, are created without regard to the value of this property.

A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.rootVolume.zones

For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone.

A list of strings, for example ["zone-1", "zone-2"].

controlPlane.platform.openstack.serverGroupPolicy

Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity, soft-affinity, and soft-anti-affinity. The default value is soft-anti-affinity.

An affinity policy prevents migrations, and therefore affects OpenStack upgrades. The affinity policy is not supported.

If you use a strict anti-affinity policy, an additional OpenStack host is required during instance migration.

A server group policy to apply to the machine pool. For example, soft-affinity.

platform.openstack.clusterOSImage

The location from which the installation program downloads the FCOS image.

You must set this parameter to perform an installation in a restricted network.

An HTTP or HTTPS URL, optionally with an SHA-256 checksum.

For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.clusterOSImageProperties

Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image.

You can use this property to exceed the default persistent volume (PV) limit for OpenStack of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi.

You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes.

A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"].

platform.openstack.defaultMachinePlatform

The default machine pool platform configuration.

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}

platform.openstack.ingressFloatingIP

An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.apiFloatingIP

An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.

An IP address, for example 128.0.0.1.

platform.openstack.externalDNS

IP addresses for external DNS servers that cluster instances use for DNS resolution.

A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.loadbalancer

Whether or not to use the default, internal load balancer. If the value is set to UserManaged, this default load balancer is disabled so that you can deploy a cluster that uses an external, user-managed load balancer. If the parameter is not set, or if the value is OpenShiftManagedDefault, the cluster uses the default load balancer.

UserManaged or OpenShiftManagedDefault.

platform.openstack.machinesSubnet

The UUID of an OpenStack subnet that the cluster’s nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.

The first item in networking.machineNetwork must match the value of machinesSubnet.

If you deploy to a custom subnet, you cannot specify an external DNS server to the OKD installer. Instead, add DNS to the subnet in OpenStack.

A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
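As a sketch of how several of these optional parameters fit into install-config.yaml, consider the following excerpt. The network UUID, zone names, and DNS addresses are illustrative, and unrelated required fields are omitted:

compute:
- name: worker
  platform:
    openstack:
      zones: ["zone-1", "zone-2"]
      additionalNetworkIDs:
      - fa806b2f-ac49-4bce-b9db-124bc64209bf
      serverGroupPolicy: soft-anti-affinity
platform:
  openstack:
    clusterOSImageProperties:
      hw_scsi_model: virtio-scsi
      hw_disk_bus: scsi
    externalDNS: ["8.8.8.8", "192.168.1.12"]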

OpenStack parameters for failure domains

OpenStack failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OpenStack deployments do not have a single implementation of failure domains. Instead, availability zones are defined individually for each service, such as the compute service, Nova; the networking service, Neutron; and the storage service, Cinder.

Beginning with OKD 4.13, there is a unified definition of failure domains for OpenStack deployments that covers all supported availability zone types. You can use failure domains to control related aspects of Nova, Neutron, and Cinder configurations from a single place.

In OpenStack, a port describes a network connection and maps to an interface inside a compute machine. A port also:

  • Is defined by a network or by one or more subnets

  • Connects a machine to one or more subnets

Failure domains group the services of your deployment by using ports. If you use failure domains, each machine connects to:

  • The portTarget object with the ID control-plane while that object exists.

  • All non-control-plane portTarget objects within its own failure domain.

  • All networks in the machine pool’s additionalNetworkIDs list.

To configure failure domains for a machine pool, edit availability zone and port target parameters under controlPlane.platform.openstack.failureDomains.

Table 9. OpenStack parameters for failure domains
Parameter Description Values

platform.openstack.failuredomains.computeAvailabilityZone

An availability zone for the server. If not specified, the cluster default is used.

The name of the availability zone. For example, nova-1.

platform.openstack.failuredomains.storageAvailabilityZone

An availability zone for the root volume. If not specified, the cluster default is used.

The name of the availability zone. For example, cinder-1.

platform.openstack.failuredomains.portTargets

A list of portTarget objects, each of which defines a network connection to attach to machines within a failure domain.

A list of portTarget objects.

platform.openstack.failuredomains.portTargets.portTarget.id

The ID of an individual port target. To select that port target as the first network for machines, set the value of this parameter to control-plane. If this parameter has a different value, it is ignored.

control-plane or an arbitrary string.

platform.openstack.failuredomains.portTargets.portTarget.network

Required. The name or ID of the network to attach to machines in the failure domain.

A network object that contains either a name or UUID. For example:

network:
  id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6

or:

network:
  name: my-network-1

platform.openstack.failuredomains.portTargets.portTarget.fixedIPs

Subnets to allocate fixed IP addresses to. These subnets must exist within the same network as the port.

A list of subnet objects.

You cannot combine zone fields and failure domains. If you want to use failure domains, the controlPlane.zone and controlPlane.rootVolume.zone fields must be left unset.
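The following excerpt is a minimal sketch of a single failure domain whose port target allocates fixed IP addresses from a named subnet. The availability zone, network, and subnet names are illustrative. As in the complete example later in this document, the TechPreviewNoUpgrade feature set is also required:

controlPlane:
  platform:
    openstack:
      failureDomains:
      - computeAvailabilityZone: 'nova-1'
        storageAvailabilityZone: 'cinder-1'
        portTargets:
        - id: control-plane
          network:
            name: my-network-1
          fixedIPs:
          - subnet:
              name: my-subnet-1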

Custom subnets in OpenStack deployments

Optionally, you can deploy a cluster on an OpenStack subnet of your choice. The subnet’s UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.

This subnet is used as the cluster’s primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different OpenStack subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet’s UUID.

Before you run the OKD installer with a custom subnet, verify that your configuration meets the following requirements:

  • The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled.

  • The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork.

  • The installation program user has permission to create ports on this network, including ports with fixed IP addresses.

Clusters that use custom subnets have the following limitations:

  • If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network.

  • If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your OpenStack machines.

  • You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the OpenStack network.
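For example, you might add a DNS nameserver to the subnet by using the OpenStack CLI. The nameserver address and subnet identifier in this sketch are illustrative:

$ openstack subnet set --dns-nameserver 198.51.100.53 <subnet_id>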

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool.

The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace.

Sample customized install-config.yaml file for OpenStack with Kuryr

To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType. This sample install-config.yaml demonstrates all of the possible OpenStack customization options.

This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program.

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16 (1)
  networkType: Kuryr (2)
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    apiFloatingIP: 128.0.0.1
    trunkSupport: true (3)
    octaviaSupport: true (3)
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts.
2 The cluster network plugin to install. The supported values are Kuryr, OVNKubernetes, and OpenShiftSDN. The default value is OVNKubernetes.
3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not work properly. Trunks are needed to connect the pods to the OpenStack network, and Octavia is required to create the OKD services.

Example installation configuration section that uses failure domains

OpenStack failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The following section of an install-config.yaml file demonstrates the use of failure domains in a cluster that you deploy on OpenStack:

# ...
controlPlane:
  name: master
  platform:
    openstack:
      type: m1.large
      failureDomains:
      - computeAvailabilityZone: 'nova-1'
        storageAvailabilityZone: 'cinder-1'
        portTargets:
        - id: control-plane
          network:
            id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6
      - computeAvailabilityZone: 'nova-2'
        storageAvailabilityZone: 'cinder-2'
        portTargets:
        - id: control-plane
          network:
            id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1
      - computeAvailabilityZone: 'nova-3'
        storageAvailabilityZone: 'cinder-3'
        portTargets:
        - id: control-plane
          network:
            id: 8e4b4e0d-3865-4a9b-a769-559270271242
featureSet: TechPreviewNoUpgrade
# ...

Installation configuration for a cluster on OpenStack with a user-managed load balancer

Deployment on OpenStack with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer.

apiVersion: v1
baseDomain: mydomain.test
compute:
- name: worker
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3
controlPlane:
  name: master
  platform:
    openstack:
      type: m1.xlarge
  replicas: 3
metadata:
  name: mycluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.10.0/24
platform:
  openstack:
    cloud: mycloud
    machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a (1)
    apiVIPs:
    - 192.168.10.5
    ingressVIPs:
    - 192.168.10.7
    loadBalancer:
      type: UserManaged (2)
featureSet: TechPreviewNoUpgrade (3)
1 Regardless of which load balancer you use, the load balancer is deployed to this subnet.
2 The UserManaged value indicates that you are using a user-managed load balancer.
3 Because user-managed load balancers are in Technology Preview, you must include the TechPreviewNoUpgrade value to deploy a cluster that uses a user-managed load balancer.

Cluster deployment on OpenStack provider networks

You can deploy your OKD clusters on OpenStack with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process.

OpenStack provider networks map directly to an existing physical network in the data center. An OpenStack administrator must create them.

In the following example, OKD workloads are connected to a data center by using a provider network:

Figure: Four OKD workloads on OpenStack. Each workload is connected by its NIC to an external data center by using a provider network.

OKD clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation.

Example provider network types include flat (untagged) and VLAN (802.1Q tagged).

A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections.

You can learn more about provider and tenant networks in the OpenStack documentation.

OpenStack provider network requirements for cluster installation

Before you install an OKD cluster, your OpenStack deployment and provider network must meet a number of conditions:

  • The OpenStack networking service (Neutron) is enabled and accessible through the OpenStack networking API.

  • The OpenStack networking service has the port security and allowed address pairs extensions enabled.

  • The provider network can be shared with other tenants.

    Use the openstack network create command with the --share flag to create a network that can be shared.

  • The OpenStack project that you use to install the cluster must own the provider network, as well as an appropriate subnet.

    To create a network for a project that is named "openshift," enter the following command:
    $ openstack network create --project openshift
    To create a subnet for a project that is named "openshift," enter the following command:
    $ openstack subnet create --project openshift

    To learn more about creating networks on OpenStack, read the provider networks documentation.

    If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network.

    Provider networks must be owned by the OpenStack project that is used to create the cluster. If they are not, the OpenStack Compute service (Nova) cannot request a port from that network.

  • Verify that the provider network can reach the OpenStack metadata service IP address, which is 169.254.169.254 by default.

    Depending on your OpenStack SDN and networking service configuration, you might need to provide the route when you create the subnet. For example:

    $ openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ...
  • Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project.

Deploying a cluster that has a primary interface on a provider network

You can deploy an OKD cluster that has its primary network interface on an OpenStack provider network.

Prerequisites
  • Your OpenStack deployment is configured as described by "OpenStack provider network requirements for cluster installation".

Procedure
  1. In a text editor, open the install-config.yaml file.

  2. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP.

  3. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP.

  4. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet.

  5. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet.

The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block.

Section of an installation configuration file for a cluster that relies on an OpenStack provider network
        ...
        platform:
          openstack:
            apiVIPs: (1)
              - 192.0.2.13
            ingressVIPs: (1)
              - 192.0.2.23
            machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
            # ...
        networking:
          machineNetwork:
          - cidr: 192.0.2.0/24
1 In OKD 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings.

You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface.

When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network.

You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list.

After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks.

Kuryr ports pools

A Kuryr ports pool maintains a number of ports on standby for pod creation.

Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.

The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OKD cluster nodes.

Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.

Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior:

  • The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false.

  • The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1.

  • The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting.

    If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.

  • The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3.

Adjusting Kuryr ports pools during installation

During installation, you can configure how Kuryr manages OpenStack Neutron ports to control the speed and efficiency of pod creation.

Prerequisites
  • Create and modify the install-config.yaml file.

Procedure
  1. From a command line, create the manifest files:

    $ ./openshift-install create manifests --dir <installation_directory> (1)
    1 For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
  2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

    $ touch <installation_directory>/manifests/cluster-network-03-config.yml (1)
    1 For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

    After you create the file, several network configuration files are in the manifests/ directory, as shown:

    $ ls <installation_directory>/manifests/cluster-network-*
    Example output
    cluster-network-01-crd.yml
    cluster-network-02-config.yml
    cluster-network-03-config.yml
  3. Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want:

    $ oc edit networks.operator.openshift.io cluster
  4. Edit the settings to meet your requirements. The following file is provided as an example:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
      defaultNetwork:
        type: Kuryr
        kuryrConfig:
          enablePortPoolsPrepopulation: false (1)
          poolMinPorts: 1 (2)
          poolBatchPorts: 3 (3)
          poolMaxPorts: 5 (4)
          openstackServiceNetwork: 172.30.0.0/15 (5)
    1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false.
    2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts. The default value is 1.
    3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts. The default value is 3.
    4 If the number of free ports in a pool is higher than the value of poolMaxPorts, Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0.
    5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to OpenStack Octavia’s LoadBalancers.

    If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OKD and the other for VRRP connections. Because these IP addresses are managed by OKD and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork, and the range that is defined by serviceNetwork must fall entirely within the range that is defined by openStackServiceNetwork.

    The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter.

    If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. For example, a serviceNetwork value of 172.30.0.0/16 results in an openStackServiceNetwork value of 172.30.0.0/15.

  5. Save the cluster-network-03-config.yml file, and exit the text editor.

  6. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster.

Generating a key pair for cluster node SSH access

During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Do not skip this procedure in production environments, where disaster recovery and debugging is required.

On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the /home/core/.ssh/authorized_keys.d/core file. However, the Machine Config Operator manages SSH keys in the /home/core/.ssh/authorized_keys file and configures sshd to ignore the /home/core/.ssh/authorized_keys.d/core file. As a result, newly provisioned OKD nodes are not accessible using SSH until the Machine Config Operator reconciles the machine configs with the authorized_keys file. After you can access the nodes using SSH, you can delete the /home/core/.ssh/authorized_keys.d/core file.

Procedure
  1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

    $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
    1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
  2. View the public SSH key:

    $ cat <path>/<file_name>.pub

    For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

    $ cat ~/.ssh/id_ed25519.pub
  3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

    On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

    1. If the ssh-agent process is not already running for your local user, start it as a background task:

      $ eval "$(ssh-agent -s)"
      Example output
      Agent pid 31874
  4. Add your SSH private key to the ssh-agent:

    $ ssh-add <path>/<file_name> (1)
    1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
    Example output
    Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
  • When you install OKD, provide the SSH public key to the installation program.

Enabling access to the environment

At deployment, all OKD machines are created in an OpenStack tenant network. Therefore, they are not accessible directly in most OpenStack deployments.

You can configure OKD API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.

Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OKD API and cluster applications.

Procedure
  1. Using the OpenStack CLI, create the API FIP:

    $ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
  2. Using the OpenStack CLI, create the apps, or Ingress, FIP:

    $ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
  3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

    api.<cluster_name>.<base_domain>.  IN  A  <API_FIP>
    *.apps.<cluster_name>.<base_domain>. IN  A <apps_FIP>

    If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:

    • <api_floating_ip> api.<cluster_name>.<base_domain>

    • <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>

    • <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>

    • <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>

    • <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>

    • <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>

    The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLI tools. You can access the user applications by using the additional entries that point to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.

  4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

    • platform.openstack.ingressFloatingIP

    • platform.openstack.apiFloatingIP

If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file.
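For example, the relevant section of the install-config.yaml file might resemble the following sketch, in which the floating IP addresses and network name are illustrative:

platform:
  openstack:
    externalNetwork: external
    apiFloatingIP: 203.0.113.19
    ingressFloatingIP: 203.0.113.23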

You can make OKD resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.

Completing installation without floating IP addresses

You can install OKD on OpenStack without providing floating IP addresses.

In the install-config.yaml file, do not define the following parameters:

  • platform.openstack.ingressFloatingIP

  • platform.openstack.apiFloatingIP

If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines.

You can enable name resolution by creating DNS records for the API and Ingress ports. For example:

api.<cluster_name>.<base_domain>.  IN  A  <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN  A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing.
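For example, /etc/hosts entries might follow this pattern, where the port IP addresses and domain values are placeholders for your own:

<api_port_IP> api.<cluster_name>.<base_domain>
<ingress_port_IP> console-openshift-console.apps.<cluster_name>.<base_domain>
<ingress_port_IP> oauth-openshift.apps.<cluster_name>.<base_domain>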

Deploying the cluster

You can install OKD on a compatible cloud platform.

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites
  • Obtain the OKD installation program and the pull secret for your cluster.

  • Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.

Procedure
  • Change to the directory that contains the installation program and initialize the cluster deployment:

    $ ./openshift-install create cluster --dir <installation_directory> \ (1)
        --log-level=info (2)
    
    1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
    2 To view different installation details, specify warn, debug, or error instead of info.
Verification

When the cluster deployment completes successfully:

  • The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.

  • Credential information also outputs to <installation_directory>/.openshift_install.log.

Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Verifying cluster status

You can verify your OKD cluster’s status during or after installation.

Procedure
  1. In the cluster environment, export the administrator’s kubeconfig file:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

    The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

  2. View the control plane and compute machines created after a deployment:

    $ oc get nodes
  3. View your cluster’s version:

    $ oc get clusterversion
  4. View your Operators' status:

    $ oc get clusteroperator
  5. View all running pods in the cluster:

    $ oc get pods -A

Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OKD installation.

Prerequisites
  • You deployed an OKD cluster.

  • You installed the oc CLI.

Procedure
  1. Export the kubeadmin credentials:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
    1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
  2. Verify you can run oc commands successfully using the exported configuration:

    $ oc whoami
    Example output
    system:admin
Additional resources

Next steps