Configuring for OpenStack

Overview

When deployed on OpenStack, OpenShift Container Platform can be configured to access the OpenStack infrastructure, including using OpenStack Cinder volumes as persistent storage for application data.

OpenShift Container Platform 3.11 is supported for use with Red Hat OpenStack Platform 13.

The latest OpenShift Container Platform release supports both the latest Red Hat OpenStack Platform long life release and intermediate release. The release cycles of OpenShift Container Platform and Red Hat OpenStack Platform are different and versions tested may vary in the future depending on the release dates of both products.

Before you Begin

OpenShift Container Platform SDN

The default OpenShift Container Platform SDN is OpenShiftSDN. An alternative is Kuryr SDN.

Kuryr SDN

Kuryr is a CNI plug-in that uses Neutron and Octavia to provide networking for pods and services. It is primarily designed for OpenShift Container Platform clusters that run on OpenStack virtual machines. Kuryr improves network performance by plugging OpenShift Container Platform pods into the OpenStack SDN. In addition, it provides interconnectivity between OpenShift Container Platform pods and OpenStack virtual instances.

Kuryr is recommended for OpenShift Container Platform deployments on encapsulated OpenStack tenant networks, in order to avoid double encapsulation such as running an encapsulated OpenShift SDN over an OpenStack network. This applies whenever VXLAN, GRE, or GENEVE tunneling is required.

Conversely, implementing Kuryr does not make sense in the following cases:

  • You use provider networks, tenant VLANs, or a third party commercial SDN such as Cisco ACI or Juniper Contrail.

  • The deployment uses many services relative to the number of hypervisors or OpenShift Container Platform virtual machine nodes. Each OpenShift Container Platform service creates an Octavia Amphora virtual machine in OpenStack, which hosts the required load balancer.

To enable Kuryr SDN, your environment must meet the following requirements:

  • Running OpenStack 13 or later

  • Overcloud with Octavia

  • Neutron Trunk ports extension enabled

  • If the ML2/OVS Neutron driver is used, the Open vSwitch firewall driver must be used instead of the ovs-hybrid driver.

To use Kuryr with OpenStack 13.0.13, the Kuryr container images must be version 3.11.306 or higher.

OpenShift Container Platform Prerequisites

A successful deployment of OpenShift Container Platform requires a number of prerequisites. These consist of a set of infrastructure and host configuration steps prior to the actual installation of OpenShift Container Platform using Ansible. The following sections discuss the prerequisites and configuration changes required for OpenShift Container Platform on an OpenStack environment.

All of the OpenStack CLI commands in this reference environment are executed from a node other than the director node, to avoid package conflicts with Ansible version 2.6 and later. Be sure to install the following packages from the specified repositories.

Example:

Enable the rhel-7-server-openstack-13-tools-rpms and the required OpenShift Container Platform repositories from Set Up Repositories.

$ sudo subscription-manager repos \
--enable rhel-7-server-openstack-13-tools-rpms \
--enable rhel-7-server-openstack-14-tools-rpms
$ sudo subscription-manager repo-override --repo=rhel-7-server-openstack-14-tools-rpms --add=includepkgs:"python2-openstacksdk.* python2-keystoneauth1.* python2-os-service-types.*"
$ sudo yum install -y python2-openstackclient python2-heatclient python2-octaviaclient ansible

Verify that the packages are at least the following versions (check with rpm -q <package_name>, as shown after this list):

  • python2-openstackclient - 3.14.1-1

  • python2-heatclient - 1.14.0-1

  • python2-octaviaclient - 1.4.0-1

  • python2-openstacksdk - 0.17.2
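
A quick way to check all of these at once is shown below; installed versions may be newer than the minimums listed:

$ rpm -q python2-openstackclient python2-heatclient python2-octaviaclient python2-openstacksdk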

Enabling Octavia: OpenStack Load Balancing as a Service (LBaaS)

Octavia is a supported load balancer solution that is recommended for use with OpenShift Container Platform to load balance external incoming traffic and to provide a single view of the OpenShift Container Platform master services for applications.

To enable Octavia, the Octavia service must be included during the installation of the OpenStack overcloud, or upgraded if the overcloud already exists. The following basic, non-custom steps enable Octavia and apply to both a clean installation of the overcloud and an overcloud update.

The following steps only capture the key pieces required during the deployment of OpenStack when dealing with Octavia. For more information, see the OpenStack installation documentation. Also note that registry methods vary; see the documentation on registry methods for more information. This example uses the local registry method.

If you are using the local registry, create a template to upload the images to the registry, as shown in the following example.

(undercloud) $ openstack overcloud container image prepare \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
--namespace=registry.access.redhat.com/rhosp13 \
--push-destination=<local-ip-from-undercloud.conf>:8787 \
--prefix=openstack- \
--tag-from-label {version}-{release} \
--output-env-file=/home/stack/templates/overcloud_images.yaml \
--output-images-file /home/stack/local_registry_images.yaml

Verify that the created local_registry_images.yaml contains the Octavia images.

Octavia images in local registry file
...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
  push_destination: <local-ip-from-undercloud.conf>:8787

The versions of the Octavia containers will vary depending upon the specific Red Hat OpenStack Platform release installed.

The following step pulls the container images from registry.redhat.io to the undercloud node. This process might take some time depending on the speed of the network and undercloud disk.

(undercloud) $ sudo openstack overcloud container image upload \
  --config-file  /home/stack/local_registry_images.yaml \
  --verbose

Because an Octavia load balancer is used to access the OpenShift API, you must increase the default timeouts for its listeners' connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes (1,200,000 milliseconds) by passing the following file to the overcloud deploy command:

(undercloud) $ cat octavia_timeouts.yaml
parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000

This is not needed in Red Hat OpenStack Platform 14 and later.

Install or update your overcloud environment with Octavia:

openstack overcloud deploy --templates \
.
.
.
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  -e octavia_timeouts.yaml
.
.
.

The command above includes only the files associated with Octavia. This command varies based upon your specific installation of OpenStack. See the official OpenStack documentation for further information. For more information on customizing your Octavia installation, see Installation of Octavia using Director.

If Kuryr SDN is used, the overcloud installation requires the "trunk" extension to be enabled in Neutron. This is enabled by default on director deployments. Use the openvswitch firewall driver instead of the default ovs-hybrid driver when the Neutron backend is ML2/OVS. No modifications are needed if the backend is ML2/OVN.

Creating OpenStack User Accounts, Projects, and Roles

Before installing OpenShift Container Platform, the Red Hat OpenStack Platform (RHOSP) environment requires a project, often referred to as a tenant, that stores the OpenStack instances on which OpenShift Container Platform is installed. This project requires ownership by a user, and that user's role must be set to _member_.

The following steps show how to accomplish the above.

As the OpenStack overcloud administrator,

  1. Create a project (tenant) to store the RHOSP instances:

    $ openstack project create <project>
  2. Create a RHOSP user that has ownership of the previously created project:

    $ openstack user create --password <password> <username>
  3. Set the role of the user:

    $ openstack role add --user <username> --project <project> _member_

The default quotas assigned to new RHOSP projects are not high enough for OpenShift Container Platform installations. Increase the quotas to at least 30 security groups, 200 security group rules, and 200 ports.

$ openstack quota set --secgroups 30 --secgroup-rules 200 --ports 200 <project>
where <project> is the name of the project to modify.
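
You can confirm that the new limits are in place by displaying the project's quotas (the exact columns in the output vary by OpenStack release):

$ openstack quota show <project>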

Extra steps for Kuryr SDN

If Kuryr SDN is enabled, especially if you are using namespace isolation, increase your project’s quotas to meet these minimum requirements:

  • 300 security groups - one for each namespace plus one for each load balancer

  • 150 networks - one for each namespace

  • 150 subnets - one for each namespace

  • 500 security group rules

  • 500 ports - one port per Pod and additional ports for pools to speed up Pod creation

This is not a global recommendation. Adjust your quotas to meet your requirements.

If you are using namespace isolation, each namespace is given a new network and subnet. Additionally, a security group is created to enable traffic between Pods in the namespace.

$ openstack quota set --networks 150 --subnets 150 --secgroups 300 --secgroup-rules 500 --ports 500 <project>
where <project> is the name of the project to modify.

If you enabled namespace isolation, you must add the project ID to the octavia.conf configuration file after you create the project. This step ensures that the required load balancer security groups belong to that project and that they can be updated to enforce service isolation across namespaces.

  1. Get the project ID

    $ openstack project show <project>
    +-------------+----------------------------------+
    | Field       | Value                            |
    +-------------+----------------------------------+
    | description |                                  |
    | domain_id   | default                          |
    | enabled     | True                             |
    | id          | PROJECT_ID                       |
    | is_domain   | False                            |
    | name        | *<project>*                      |
    | parent_id   | default                          |
    | tags        | []                               |
    +-------------+----------------------------------+
  2. Add the project ID to octavia.conf on the controllers and restart the Octavia worker.

    $ source stackrc  # undercloud credentials
    $ openstack server list
    +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
    | ID                                   | Name         | Status | Networks              | Image          | Flavor     |
    +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
    | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
    | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
    +--------------------------------------+--------------+--------+-----------------------+----------------+------------+
    
    $ ssh heat-admin@192.168.24.8  # ssh into the controller(s)
    
    controller-0$ vi /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf
    [controller_worker]
    # List of project ids that are allowed to have Load balancer security groups
    # belonging to them.
    amp_secgroup_allowed_projects = PROJECT_ID
    
    controller-0$ sudo docker restart octavia_worker

Configuring the RC file

After you configure the project, an OpenStack administrator can create an RC file with all the required information for the user(s) implementing the OpenShift Container Platform environment.

An example RC file:

$ cat path/to/examplerc
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="}  /^OS_/ {print $1}' ); do unset $key ; done
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=<project-name>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_AUTH_URL=http://<ip>:5000/v3
export OS_CLOUDNAME=<cloud-name>
export OS_IDENTITY_API_VERSION=3

# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
	export PS1=${PS1:-""}
	export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
	export CLOUDPROMPT_ENABLED=1
fi

Changing OS_PROJECT_DOMAIN_NAME and OS_USER_DOMAIN_NAME from the Default value is supported as long as both reference the same domain.

As the user(s) implementing the OpenShift Container Platform environment, on the OpenStack director node or workstation, source the credentials as follows:

$ source path/to/examplerc
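
To quickly confirm that the sourced credentials are valid, request a token; any read-only command, such as openstack network list, works equally well:

$ openstack token issue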

Create an OpenStack Flavor

Within OpenStack, flavors define the size of a virtual server by defining the compute, memory, and storage capacity of nova computing instances. Since the base image within this reference architecture is Red Hat Enterprise Linux 7.5, m1.node and m1.master flavors are created with the specifications shown in Minimum System Requirements for OpenShift.

Although the minimum system requirements are sufficient to run a cluster, it is recommended to increase the vCPU count on master nodes to improve performance. Additionally, more memory is recommended if etcd is co-located on the master nodes.

Table 1. Minimum System Requirements for OpenShift

Node Type   CPU   RAM     Root Disk   Flavor
Masters     4     16 GB   45 GB       m1.master
Nodes       1     8 GB    20 GB       m1.node

As an OpenStack administrator, create the flavors:

$ openstack flavor create <flavor_name> \
    --id auto \
    --ram <ram_in_MB> \
    --disk <disk_in_GB> \
    --vcpus <num_vcpus>

The example below shows the creation of the flavors within this reference environment.

$ openstack flavor create m1.master \
    --id auto \
    --ram 16384 \
    --disk 45 \
    --vcpus 4
$ openstack flavor create m1.node \
    --id auto \
    --ram 8192 \
    --disk 20 \
    --vcpus 1
If you do not have OpenStack administrator privileges to create new flavors, use existing flavors in the OpenStack environment that meet the requirements in Minimum System Requirements for OpenShift.

Verify the OpenStack flavors:

$ openstack flavor list

Creating an OpenStack Keypair

Red Hat OpenStack Platform uses cloud-init to place an ssh public key on each instance as it is created to allow ssh access to the instance. Red Hat OpenStack Platform expects the user to hold the private key.

If the private key is lost, the instances cannot be accessed.

To generate a keypair, use the following command:

$ openstack keypair create <keypair-name> > /path/to/<keypair-name>.pem

Verify the keypair creation:

$ openstack keypair list

Once the keypair is created, set its permissions to 600, which allows only the owner of the file to read and write to it.

$ chmod 600 /path/to/<keypair-name>.pem

Setting up DNS for OpenShift Container Platform

DNS service is an important component in the OpenShift Container Platform environment. Regardless of the provider of DNS, an organization is required to have certain records in place to serve the various OpenShift Container Platform components.

Using /etc/hosts is not valid; a proper DNS service must exist.

Using the key secret of the DNS server, you can provide the information to the OpenShift Ansible Installer, and it automatically adds A records for the target instances and the various OpenShift Container Platform components. This setup is described later, when configuring the OpenShift Ansible Installer.

Access to a DNS server is expected. You can use Red Hat Labs DNS Helper for assistance with access.

Application DNS

Applications served by OpenShift are accessible by the router on ports 80/TCP and 443/TCP. The router uses a wildcard record to map all host names under a specific subdomain to the same IP address without requiring a separate record for each name.

This allows OpenShift Container Platform to add applications with arbitrary names as long as they are under that subdomain.

For example, a wildcard record for *.apps.example.com causes DNS name lookups for tax.apps.example.com and home-goods.apps.example.com to both return the same IP address: 10.19.x.y. All traffic is forwarded to the OpenShift Routers. The Routers examine the HTTP headers of the queries and forward them to the correct destination.

With a load balancer such as Octavia at host address 10.19.x.y, the wildcard DNS record can be added as follows:

Table 2. Load Balancer DNS records

IP Address   Hostname             Purpose
10.19.x.y    *.apps.example.com   User access to application web services
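
Once the wildcard record is in place, you can spot-check the resolution with dig; the hostnames below reuse the earlier example and should both return the load balancer address:

$ dig +short tax.apps.example.com
10.19.x.y
$ dig +short home-goods.apps.example.com
10.19.x.y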

Creation of OpenShift Container Platform Networks via OpenStack

When deploying OpenShift Container Platform on Red Hat OpenStack Platform as described in this segment, two networks are required: a public network and an internal network.

Public Network

The public network provides external access and can be reached by the outside world. The public network can be created only by an OpenStack administrator.

The following commands provide an example of creating an OpenStack provider network for public network access.

As an OpenStack administrator (overcloudrc access),

$ source /path/to/examplerc

$ openstack network create <public-net-name> \
  --external \
  --provider-network-type flat \
  --provider-physical-network datacentre

$ openstack subnet create <public-subnet-name> \
  --network <public-net-name> \
  --dhcp \
  --allocation-pool start=<float_start_ip>,end=<float_end_ip> \
  --gateway <ip> \
  --subnet-range <CIDR>

Once the network and subnet have been created, verify them:

$ openstack network list
$ openstack subnet list

<float_start_ip> and <float_end_ip> define the floating IP pool provided to the network labeled as the public network. The Classless Inter-Domain Routing (CIDR) value uses the format <ip>/<routing_prefix>, for example 10.0.0.0/24.
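
For illustration, a filled-in version of the commands above might look like the following; the network name, physical network, and address values are examples only and must match your environment:

$ openstack network create public_net \
  --external \
  --provider-network-type flat \
  --provider-physical-network datacentre

$ openstack subnet create public_subnet \
  --network public_net \
  --dhcp \
  --allocation-pool start=10.0.0.100,end=10.0.0.150 \
  --gateway 10.0.0.1 \
  --subnet-range 10.0.0.0/24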

Internal Network

The internal network is connected to the public network via a router during the network setup. This allows each Red Hat OpenStack Platform instance attached to the internal network to request a floating IP from the public network for public access. The internal network is created automatically by the OpenShift Ansible installer by setting openshift_openstack_private_network_name. More information regarding the changes required for the OpenShift Ansible installer is provided later.

Creating OpenStack Deployment Host Security Group

OpenStack networking allows the user to define inbound and outbound traffic filters that can be applied to each instance on a network. This allows the user to limit network traffic to each instance based on the function of the instance's services rather than depending on host-based filtering. The OpenShift Ansible installer handles the proper creation of all the ports and services required for each type of host that is part of the OpenShift Container Platform cluster, except for the deployment host.

The following command creates an empty security group with no rules set for the deployment host.

$ source path/to/examplerc
$ openstack security group create <deployment-sg-name>

Verify the creation of the security group:

$ openstack security group list

Deployment Host Security Group

The deployment instance only needs to allow inbound ssh. This instance exists to give operators a stable base to deploy, monitor and manage the OpenShift Container Platform environment.

Table 3. Deployment Host Security Group TCP ports

Port/Protocol   Service   Remote source   Purpose
ICMP            ICMP      Any             Allow ping, traceroute, etc.
22/TCP          SSH       Any             Secure shell login

Create the above security group rules as follows:

$ source /path/to/examplerc
$ openstack security group rule create \
    --ingress \
    --protocol icmp \
    <deployment-sg-name>
$ openstack security group rule create \
    --ingress \
    --protocol tcp \
    --dst-port 22 \
    <deployment-sg-name>

Verify the security group rules as follows:

$ openstack security group rule list <deployment-sg-name>
+--------------------------------------+-------------+-----------+------------+-----------------------+
| ID                                   | IP Protocol | IP Range  | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+------------+-----------------------+
| 7971fc03-4bfe-4153-8bde-5ae0f93e94a8 | icmp        | 0.0.0.0/0 |            | None                  |
| b8508884-e82b-4ee3-9f36-f57e1803e4a4 | None        | None      |            | None                  |
| cb914caf-3e84-48e2-8a01-c23e61855bf6 | tcp         | 0.0.0.0/0 | 22:22      | None                  |
| e8764c02-526e-453f-b978-c5ea757c3ac5 | None        | None      |            | None                  |
+--------------------------------------+-------------+-----------+------------+-----------------------+

OpenStack Cinder Volumes

OpenStack Block Storage provides persistent block storage management via the cinder service. Block storage enables the OpenStack user to create a volume that may be attached to different OpenStack instances.

Docker Volume

The master and node instances contain a volume to store docker images. The purpose of the volume is to ensure that a large image or container does not compromise node performance or the capabilities of the existing node.

A docker volume of at least 15 GB is required for running containers. This may need adjusting depending on the size and number of containers each node will run.

The docker volume is created by the OpenShift Ansible installer via the variable openshift_openstack_docker_volume_size. More information regarding the changes required for the OpenShift Ansible installer is provided later.

Registry volume

The OpenShift image registry requires a cinder volume to ensure that images are saved in the event that the registry needs to migrate to another node. The following steps show how to create the image registry volume via OpenStack. Once the volume is created, its volume ID is included in the OpenShift Ansible Installer OSEv3.yml file via the parameter openshift_hosted_registry_storage_openstack_volumeID, as described later.

$ source /path/to/examplerc
$ openstack volume create --size <volume-size-in-GB> <registry-name>
The registry volume size should be at least 30 GB.

Verify the creation of the volume.

$ openstack volume list
+--------------------------------------+-----------------+-----------+------+--------------+
| ID                                   | Name            | Status    | Size | Attached to  |
+--------------------------------------+-----------------+-----------+------+--------------+
| d65209f0-9061-4cd8-8827-ae6e2253a18d | <registry-name> | available |   30 |              |
+--------------------------------------+-----------------+-----------+------+--------------+

Creating and Configuring the Deployment Instance

The role of the deployment instance is to serve as a utility host for the deployment and management of OpenShift Container Platform.

Creating the Deployment Host Network and Router

Prior to instance creation, an internal network and router must be created for communication with the deployment host. The following commands create that network and router.

$ source path/to/examplerc

$ openstack network create <deployment-net-name>

$ openstack subnet create --network <deployment-net-name> \
  --subnet-range <subnet_range> \
  --dns-nameserver <dns-ip> \
  <deployment-subnet-name>

$ openstack router create <deployment-router-name>

$ openstack router set --external-gateway <public-net-name> <deployment-router-name>

$ openstack router add subnet <deployment-router-name> <deployment-subnet-name>
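
To confirm the router is wired correctly, you can inspect it and list its ports; the external gateway and the attached deployment subnet should both appear:

$ openstack router show <deployment-router-name>
$ openstack port list --router <deployment-router-name>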

Deploying the Deployment Instance

With the network and security group created, deploy the instance.

$ domain=<domain>
$ netid1=$(openstack network show <deployment-net-name> -f value -c id)
$ openstack server create \
    --nic net-id=$netid1 \
    --flavor <flavor> \
    --image <image> \
    --key-name <keypair> \
    --security-group <deployment-sg-name> \
    deployment.$domain
If the m1.small flavor does not exist by default, use an existing flavor that meets the requirements of 1 vCPU and 2 GB of RAM.
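
Before continuing, you can confirm that the instance reaches the ACTIVE state; the server name matches the one used in the create command above:

$ openstack server show deployment.$domain -f value -c status
ACTIVE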

Creating and Adding Floating IP to the Deployment Instance

Once the deployment instance is created, a floating IP must be created and then allocated to the instance. The following shows an example.

$ source /path/to/examplerc
$ openstack floating ip create <public-network-name>
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2017-08-24T22:44:03Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 10.20.120.150                       |
| floating_network_id | 084884f9-d9d2-477a-bae7-26dbb4ff1873 |
| headers             |                                      |
| id                  | 2bc06e39-1efb-453e-8642-39f910ac8fd1 |
| port_id             | None                                 |
| project_id          | ca304dfee9a04597b16d253efd0e2332     |
| project_id          | ca304dfee9a04597b16d253efd0e2332     |
| revision_number     | 1                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| updated_at          | 2017-08-24T22:44:03Z                 |
+---------------------+--------------------------------------+

Within the above output, the floating_ip_address field shows that the floating IP 10.20.120.150 is created. In order to assign this IP to the deployment instance, run the following command:

$ source /path/to/examplerc
$ openstack server add floating ip <deployment-instance-name> <ip>

For example, if instance deployment.example.com is to be assigned IP 10.20.120.150 the command would be:

$ source /path/to/examplerc
$ openstack server add floating ip deployment.example.com 10.20.120.150

Adding the RC File to the Deployment Host

Once the deployment host exists, copy the RC file created earlier to the deployment host via scp as follows:

$ scp <rc-file-deployment-host> cloud-user@<ip>:/home/cloud-user/

Deployment Host Configuration for OpenShift Container Platform

The following subsections describe all the steps needed to properly configure the deployment instance.

Configure ~/.ssh/config to use Deployment Host as a Jumphost

To easily connect to the OpenShift Container Platform environment, follow the steps below.

On the OpenStack director node or local workstation with the private key, <keypair-name>.pem:

$ exec ssh-agent bash

$ ssh-add /path/to/<keypair-name>.pem
Identity added: /path/to/<keypair-name>.pem (/path/to/<keypair-name>.pem)

Add to the ~/.ssh/config file:

Host deployment
    HostName        <deployment_fqdn_hostname OR IP address>
    User            cloud-user
    IdentityFile    /path/to/<keypair-name>.pem
    ForwardAgent     yes

SSH into the deployment host with the -A option, which enables forwarding of the authentication agent connection.

Ensure the permissions are read write only for the owner of the ~/.ssh/config file:

$ chmod 600 ~/.ssh/config
$ ssh -A cloud-user@deployment

Once logged in to the deployment host, verify that ssh agent forwarding is working by checking for the SSH_AUTH_SOCK variable:

$ echo "$SSH_AUTH_SOCK"
/tmp/ssh-NDFDQD02qB/agent.1387

Subscription Manager and Enabling OpenShift Container Platform Repositories

Within the deployment instance, register it with Red Hat Subscription Manager. This can be accomplished by using credentials:

$ sudo subscription-manager register --username <user> --password '<password>'

Alternatively, you can use an activation key:

$ sudo subscription-manager register --org="<org_id>" --activationkey=<keyname>

Once registered, enable the following repositories:

$ sudo subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.11-rpms" \
    --enable="rhel-7-server-ansible-2.6-rpms" \
    --enable="rhel-7-server-openstack-13-rpms" \
    --enable="rhel-7-server-openstack-13-tools-rpms"

Refer to Set Up Repositories to confirm the proper OpenShift Container Platform repositories and Ansible versions to enable. The list above is just a sample.

Required Packages on the Deployment Host

Install the following required packages on the deployment host:

  • openshift-ansible

  • python-openstackclient

  • python2-heatclient

  • python2-octaviaclient

  • python2-shade

  • python-dns

  • git

  • ansible

$ sudo yum -y install openshift-ansible python-openstackclient python2-heatclient python2-octaviaclient python2-shade python-dns git ansible

Configure Ansible

ansible is installed on the deployment instance to perform the registration, installation of packages, and the deployment of the OpenShift Container Platform environment on the master and node instances.

Before running playbooks, it is important to create an ansible.cfg file to reflect the environment you wish to deploy:

$ cat ~/ansible.cfg

[defaults]
forks = 20
host_key_checking = False
remote_user = openshift
gathering = smart
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 600
log_path = $HOME/ansible.log
nocows = 1
callback_whitelist = profile_tasks
inventory = /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py,/home/cloud-user/inventory

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=false
control_path = %(directory)s/%%h-%%r
pipelining = True
timeout = 10

[persistent_connection]
connect_timeout = 30
connect_retries = 30
connect_interval = 1

The following parameter values are important in the ansible.cfg file.

  • The remote_user must remain as the user openshift.

  • In the inventory parameter, ensure that there is no space between the two inventory paths.

Example: inventory = path/to/inventory1,path/to/inventory2

The code block above can overwrite the default values in the file. Be sure to populate <keypair-name> with the keypair that was copied to the deployment instance.

The inventory folder is created in Preparing the Inventory for Provisioning.

OpenShift Authentication

OpenShift Container Platform provides the ability to use many different authentication platforms. A listing of authentication options is available in Configuring Authentication and User Agent.

Configuring an identity provider is important, as the default configuration is Deny All.
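
As a minimal sketch, an htpasswd-based identity provider can be enabled by adding variables such as the following to the OSEv3.yml inventory file discussed later; the user name and hash below are placeholders, and real entries can be generated with htpasswd -nb <user> <password>:

# Replace the default Deny All configuration with htpasswd authentication
openshift_master_identity_providers: [{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Pre-hashed credentials for the provider above (placeholder values)
openshift_master_htpasswd_users: {'ocpadmin': '<htpasswd-hash>'}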

Provisioning OpenShift Container Platform Instances using the OpenShift Ansible Playbooks

Once the creation and configuration of the deployment host is complete, we turn to preparing the environment for the deployment of OpenShift Container Platform using Ansible. In the following subsections, Ansible is configured and certain YAML files are modified to achieve a successful OpenShift Container Platform on OpenStack deployment.

Preparing the Inventory for Provisioning

With the openshift-ansible package installed in the previous steps, a sample-inventory directory is available. Copy it to the cloud-user home directory of the deployment host.

On the deployment host,

$ cp -r /usr/share/ansible/openshift-ansible/playbooks/openstack/sample-inventory/ ~/inventory

Within this inventory directory, the all.yml file contains all the different parameters that must be set in order to achieve successful provisioning of the RHOCP instances. The OSEv3.yml file contains some references required by the all.yml file and all the available OpenShift Container Platform cluster parameters that you can customize.

OpenShiftSDN All YAML file

The all.yml file has many options that can be modified to meet your specific needs. The information gathered in this file is for the provisioning portion of the instances required for a successful deployment of OpenShift Container Platform. It is important to review these options carefully. This document provides a condensed version of the all.yml file and focuses on the most critical parameters that must be set for a successful deployment.

$ cat ~/inventory/group_vars/all.yml
---
openshift_openstack_clusterid: "openshift"
openshift_openstack_public_dns_domain: *"example.com"*
openshift_openstack_dns_nameservers: *["10.19.115.228"]*
openshift_openstack_public_hostname_suffix: "-public"
openshift_openstack_nsupdate_zone: "{{ openshift_openstack_public_dns_domain }}"

openshift_openstack_keypair_name: *"openshift"*
openshift_openstack_external_network_name: *"public"*

openshift_openstack_default_image_name: *"rhel75"*

## Optional (Recommended) - This removes the need for floating IPs
## on the OpenShift Cluster nodes
openshift_openstack_node_subnet_name: *<deployment-subnet-name>*
openshift_openstack_router_name: *<deployment-router-name>*
openshift_openstack_master_floating_ip: *false*
openshift_openstack_infra_floating_ip: *false*
openshift_openstack_compute_floating_ip: *false*
## End of Optional Floating IP section

openshift_openstack_num_masters: *3*
openshift_openstack_num_infra: *3*
openshift_openstack_num_cns: *0*
openshift_openstack_num_nodes: *2*

openshift_openstack_master_flavor: *"m1.master"*
openshift_openstack_default_flavor: *"m1.node"*

openshift_openstack_use_lbaas_load_balancer: *true*

openshift_openstack_docker_volume_size: "15"

# # Roll-your-own DNS
*openshift_openstack_external_nsupdate_keys:*
  public:
    *key_secret: '/alb8h0EAFWvb4i+CMA12w=='*
    *key_name: "update-key"*
    *key_algorithm: 'hmac-md5'*
    *server: '<ip-of-DNS>'*
  private:
    *key_secret: '/alb8h0EAFWvb4i+CMA12w=='*
    *key_name: "update-key"*
    *key_algorithm: 'hmac-md5'*
    *server: '<ip-of-DNS>'*

ansible_user: openshift

## cloud config
openshift_openstack_disable_root: true
openshift_openstack_user: openshift
Because an external DNS server is used, the private and public sections both use the public IP address of the DNS server, as the DNS server does not reside in the OpenStack environment.

The values above that are enclosed by asterisks (*) require modification based upon your OpenStack environment and DNS server.

To properly modify the DNS portion of the all.yml file, log in to the DNS server and run the following commands to capture the key name, key algorithm, and key secret:

$ ssh <ip-of-DNS>
$ sudo -i
# cat /etc/named/<key-name.key>
key "update-key" {
	algorithm hmac-md5;
	secret "/alb8h0EAFWvb4i+CMA02w==";
};
The key name may vary and the above is only an example.
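
Optionally, you can confirm that the key permits dynamic updates before running the installer, assuming a copy of the key file is available on the host you run this from; the record name and address below are placeholders:

$ nsupdate -k /path/to/<key-name.key>
> server <ip-of-DNS>
> update add test-dns.apps.example.com 300 A 10.19.x.y
> send
> quit

If the update succeeds, the record can be verified with dig and then removed the same way with an update delete command.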

KuryrSDN All YAML file

The following all.yml file enables Kuryr SDN instead of the default OpenShiftSDN. Note that the example below is a condensed version; it is important to review the default template carefully.

$ cat ~/inventory/group_vars/all.yml
---
openshift_openstack_clusterid: "openshift"
openshift_openstack_public_dns_domain: *"example.com"*
openshift_openstack_dns_nameservers: *["10.19.115.228"]*
openshift_openstack_public_hostname_suffix: "-public"
openshift_openstack_nsupdate_zone: "{{ openshift_openstack_public_dns_domain }}"

openshift_openstack_keypair_name: *"openshift"*
openshift_openstack_external_network_name: *"public"*

openshift_openstack_default_image_name: *"rhel75"*

## Optional (Recommended) - This removes the need for floating IPs
## on the OpenShift Cluster nodes
openshift_openstack_node_subnet_name: *<deployment-subnet-name>*
openshift_openstack_router_name: *<deployment-router-name>*
openshift_openstack_master_floating_ip: *false*
openshift_openstack_infra_floating_ip: *false*
openshift_openstack_compute_floating_ip: *false*
## End of Optional Floating IP section

openshift_openstack_num_masters: *3*
openshift_openstack_num_infra: *3*
openshift_openstack_num_cns: *0*
openshift_openstack_num_nodes: *2*

openshift_openstack_master_flavor: *"m1.master"*
openshift_openstack_default_flavor: *"m1.node"*

## Kuryr configuration
openshift_use_kuryr: True
openshift_use_openshift_sdn: False
use_trunk_ports: True
os_sdn_network_plugin_name: cni
openshift_node_proxy_mode: userspace
kuryr_openstack_pool_driver: nested
openshift_kuryr_precreate_subports: 5

kuryr_openstack_public_net_id: *<public_ID>*

# To disable namespace isolation, comment out the next 2 lines
openshift_kuryr_subnet_driver: namespace
openshift_kuryr_sg_driver: namespace
# If you enable namespace isolation, `default` and `openshift-monitoring` become the
# global namespaces. Global namespaces can access all namespaces. All
# namespaces can access global namespaces.
# To make other namespaces global, include them here:
kuryr_openstack_global_namespaces: default,openshift-monitoring

# If OpenStack cloud endpoints are accessible over HTTPS, provide the CA certificate
kuryr_openstack_ca: *<path-to-ca-certificate>*

openshift_master_open_ports:
- service: dns tcp
  port: 53/tcp
- service: dns udp
  port: 53/udp
openshift_node_open_ports:
- service: dns tcp
  port: 53/tcp
- service: dns udp
  port: 53/udp

# To set the pod network CIDR range, uncomment the following property and set its value:
#
# openshift_openstack_kuryr_pod_subnet_prefixlen: 24
#
# The subnet prefix length value must be smaller than the CIDR value that is
# set in the inventory file as openshift_openstack_kuryr_pod_subnet_cidr.
# By default, this value is /24.

# openshift_portal_net is the range that OpenShift services and their associated Octavia
# load balancer VIPs use. Amphora VMs use Neutron ports in the range that is defined by
# openshift_openstack_kuryr_service_pool_start and openshift_openstack_kuryr_service_pool_end.
#
# The value of openshift_portal_net in the OSEv3.yml file must be within the range that is
# defined by openshift_openstack_kuryr_service_subnet_cidr. This range must be half
# of openshift_openstack_kuryr_service_subnet_cidr's range. This practice ensures that
# openshift_portal_net does not overlap with the range that load balancers' VMs use, which is
# defined by openshift_openstack_kuryr_service_pool_start and openshift_openstack_kuryr_service_pool_end.
#
# For reference only, copy the value in the next line from OSEv3.yml:
# openshift_portal_net: *"172.30.0.0/16"*

openshift_openstack_kuryr_service_subnet_cidr: *"172.30.0.0/15"*
openshift_openstack_kuryr_service_pool_start: *"172.31.0.1"*
openshift_openstack_kuryr_service_pool_end: *"172.31.255.253"*

# End of Kuryr configuration

openshift_openstack_use_lbaas_load_balancer: *true*

openshift_openstack_docker_volume_size: "15"

# # Roll-your-own DNS
*openshift_openstack_external_nsupdate_keys:*
  public:
    *key_secret: '/alb8h0EAFWvb4i+CMA12w=='*
    *key_name: "update-key"*
    *key_algorithm: 'hmac-md5'*
    *server: '<ip-of-DNS>'*
  private:
    *key_secret: '/alb8h0EAFWvb4i+CMA12w=='*
    *key_name: "update-key"*
    *key_algorithm: 'hmac-md5'*
    *server: '<ip-of-DNS>'*

ansible_user: openshift

## cloud config
openshift_openstack_disable_root: true
openshift_openstack_user: openshift

If you are using namespace isolation, the Kuryr-controller creates a new Neutron network and subnet for each namespace.

Network policies and nodeport services are not supported when Kuryr SDN is enabled.

If Kuryr is enabled, OpenShift Container Platform services are implemented through OpenStack Octavia Amphora VMs.

Octavia does not support UDP load balancing. Services that expose UDP ports are not supported.

Configuring global namespace access

The kuryr_openstack_global_namespaces parameter contains a list that defines global namespaces. By default, only the default and openshift-monitoring namespaces are included in this list.

If you are upgrading from a previous z-release of OpenShift Container Platform 3.11, note that access to other namespaces from global namespaces is controlled by the security group *-allow_from_default.

Although the remote_group_id rule can control access to other namespaces from global namespaces, using it can cause scaling and connectivity problems. To avoid these problems, switch from using remote_group_id at *-allow_from_default to remote_ip_prefix:

  1. From a command line, retrieve your networks' subnetCIDR value:

    $ oc get kuryrnets ns-default -o yaml | grep subnetCIDR
      subnetCIDR: 10.11.13.0/24
  2. Create TCP and UDP rules for this range:

    $ openstack security group rule create --remote-ip 10.11.13.0/24 --protocol tcp openshift-ansible-openshift.example.com-allow_from_default
    $ openstack security group rule create --remote-ip 10.11.13.0/24 --protocol udp openshift-ansible-openshift.example.com-allow_from_default
  3. Remove the security group rule that uses remote_group_id:

    $ openstack security group show *-allow_from_default | grep remote_group_id
    $ openstack security group rule delete REMOTE_GROUP_ID

Table 4. Description of Variables in the All YAML file

openshift_openstack_clusterid
    Cluster identification name.

openshift_openstack_public_dns_domain
    Public DNS domain name.

openshift_openstack_dns_nameservers
    IP addresses of DNS nameservers.

openshift_openstack_public_hostname_suffix
    Adds a suffix to the node hostname in the DNS record, for both public and private records.

openshift_openstack_nsupdate_zone
    Zone to be updated with OCP instance IPs.

openshift_openstack_keypair_name
    Keypair name used to log in to OCP instances.

openshift_openstack_external_network_name
    OpenStack public network name.

openshift_openstack_default_image_name
    OpenStack image used for OCP instances.

openshift_openstack_num_masters
    Number of master nodes to deploy.

openshift_openstack_num_infra
    Number of infrastructure nodes to deploy.

openshift_openstack_num_cns
    Number of container native storage nodes to deploy.

openshift_openstack_num_nodes
    Number of application nodes to deploy.

openshift_openstack_master_flavor
    Name of the OpenStack flavor used for master instances.

openshift_openstack_default_flavor
    Name of the OpenStack flavor used for all instances that do not have a specific flavor set.

openshift_openstack_use_lbaas_load_balancer
    Boolean value that enables the Octavia load balancer (Octavia must be installed).

openshift_openstack_docker_volume_size
    Minimum size of the Docker volume (required variable).

openshift_openstack_external_nsupdate_keys
    Updates the DNS with the instance IP addresses.

ansible_user
    Ansible user used to deploy OpenShift Container Platform. "openshift" is the required name and must not be changed.

openshift_openstack_disable_root
    Boolean value that disables root access.

openshift_openstack_user
    OCP instances are created with this user.

openshift_openstack_node_subnet_name
    Name of the existing OpenStack subnet to use for the deployment. This should be the same subnet name used for your deployment host.

openshift_openstack_router_name
    Name of the existing OpenStack router to use for the deployment. This should be the same router name used for your deployment host.

openshift_openstack_master_floating_ip
    Default is true. Set to false if you do not want floating IPs assigned to master nodes.

openshift_openstack_infra_floating_ip
    Default is true. Set to false if you do not want floating IPs assigned to infrastructure nodes.

openshift_openstack_compute_floating_ip
    Default is true. Set to false if you do not want floating IPs assigned to compute nodes.

openshift_use_openshift_sdn
    Must be set to false to disable openshift-sdn.

openshift_use_kuryr
    Must be set to true to enable Kuryr SDN.

use_trunk_ports
    Must be set to true to create the OpenStack VMs with trunk ports (required by Kuryr).

os_sdn_network_plugin_name
    Selects the SDN behavior. Must be set to cni for Kuryr.

openshift_node_proxy_mode
    Must be set to userspace for Kuryr.

openshift_master_open_ports
    Ports to be opened on the VMs when using Kuryr.

kuryr_openstack_public_net_id
    Needed by Kuryr. ID of the public OpenStack network from which floating IPs are obtained.

openshift_kuryr_subnet_driver
    Kuryr subnet driver. Must be namespace to create a subnet per namespace.

openshift_kuryr_sg_driver
    Kuryr security group driver. Must be namespace for namespace isolation.

kuryr_openstack_global_namespaces
    Global namespaces to use for namespace isolation. The default values are default and openshift-monitoring.

kuryr_openstack_ca
    Path to the CA certificate of the cloud. Required if OpenStack cloud endpoints are accessible over HTTPS.

OSEv3 YAML file

The OSEv3 YAML file specifies all the different parameters and customizations relating to the installation of OpenShift.

Below is a condensed version of the file with all required variables for a successful deployment. Additional variables may be required depending on what customization is required for your specific OpenShift Container Platform deployment.

$ cat ~/inventory/group_vars/OSEv3.yml
---

openshift_deployment_type: openshift-enterprise
openshift_release: v3.11
oreg_url: registry.access.redhat.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams: true
oreg_auth_user: <oreg_auth_user>
oreg_auth_password: <oreg_auth_pw>
# The following is required if you want to deploy the Operator Lifecycle Manager (OLM)
openshift_additional_registry_credentials: [{'host':'registry.connect.redhat.com','user':'REGISTRYCONNECTUSER','password':'REGISTRYCONNECTPASSWORD','test_image':'mongodb/enterprise-operator:0.3.2'}]

openshift_master_default_subdomain: "apps.{{ (openshift_openstack_clusterid|trim == '') | ternary(openshift_openstack_public_dns_domain, openshift_openstack_clusterid + '.' + openshift_openstack_public_dns_domain) }}"

openshift_master_cluster_public_hostname: "console.{{ (openshift_openstack_clusterid|trim == '') | ternary(openshift_openstack_public_dns_domain, openshift_openstack_clusterid + '.' + openshift_openstack_public_dns_domain) }}"

#OpenStack Credentials:
openshift_cloudprovider_kind: openstack
openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_PROJECT_NAME') }}"
openshift_cloudprovider_openstack_blockstorage_version: v2
openshift_cloudprovider_openstack_domain_name: "{{ lookup('env','OS_USER_DOMAIN_NAME') }}"
openshift_cloudprovider_openstack_conf_file: <path_to_local_openstack_configuration_file>

#Use Cinder volume for Openshift registry:
openshift_hosted_registry_storage_kind: openstack
openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem: xfs
openshift_hosted_registry_storage_volume_size: 30Gi


openshift_hosted_registry_storage_openstack_volumeID: d65209f0-9061-4cd8-8827-ae6e2253a18d
openshift_hostname_check: false
ansible_become: true

#Setting SDN (defaults to ovs-networkpolicy) not part of OSEv3.yml
#For more info, on which to choose, visit:
#https://docs.openshift.com/container-platform/3.11/architecture/networking/sdn.html#overview
networkPluginName: redhat/ovs-networkpolicy
#networkPluginName: redhat/ovs-multitenant

#Configuring identity providers with Ansible
#For initial cluster installations, the Deny All identity provider is configured
#by default. It is recommended to be configured with either htpasswd
#authentication, LDAP authentication, or Allowing all authentication (not recommended)
#For more info, visit:
#https://docs.openshift.com/container-platform/3.10/install_config/configuring_authentication.html#identity-providers-ansible
#Example of Allowing All
#openshift_master_identity_providers: [{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]


#Optional Metrics (uncomment below lines for installation)

#openshift_metrics_install_metrics: true
#openshift_metrics_cassandra_storage_type: dynamic
#openshift_metrics_storage_volume_size: 25Gi
#openshift_metrics_cassandra_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_metrics_hawkular_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_metrics_heapster_nodeselector: {"node-role.kubernetes.io/infra":"true"}

#Optional Aggregated Logging (uncomment below lines for installation)

#openshift_logging_install_logging: true
#openshift_logging_es_pvc_dynamic: true
#openshift_logging_es_pvc_size: 30Gi
#openshift_logging_es_cluster_size: 3
#openshift_logging_es_number_of_replicas: 1
#openshift_logging_es_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_logging_kibana_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_logging_curator_nodeselector: {"node-role.kubernetes.io/infra":"true"}

For further details on any of the variables listed, see an example OpenShift-Ansible host inventory.

OpenStack Prerequisites Playbook

The OpenShift Container Platform Ansible Installer provides a playbook to ensure all the provisioning steps of the OpenStack instances have been met.

Prior to running the playbook, be sure to source the RC file:

$ source path/to/examplerc

Using the ansible-playbook command on the deployment host, ensure all the prerequisites are met with the prerequisites.yml playbook:

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/prerequisites.yml

Once the prerequisite playbook completes successfully, run the provision playbook as follows:

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml

If provision.yml fails prematurely, check the status of the OpenStack stack and wait for it to finish:

$ watch openstack stack list
+--------------------------------------+-------------------+--------------------+----------------------+--------------+
| ID                                   | Stack Name        | Stack Status       | Creation Time        | Updated Time |
+--------------------------------------+-------------------+--------------------+----------------------+--------------+
| 87cb6d1c-8516-40fc-892b-49ad5cb87fac | openshift-cluster | CREATE_IN_PROGRESS | 2018-08-20T23:44:46Z | None         |
+--------------------------------------+-------------------+--------------------+----------------------+--------------+

If the stack shows CREATE_IN_PROGRESS, wait for it to reach a final result such as CREATE_COMPLETE. If the stack completes successfully, re-run the provision.yml playbook to finish the remaining required steps.

If the stack shows CREATE_FAILED, run the following command to see what caused the errors:

$ openstack stack failures list openshift-cluster

Stack Name Configuration

By default, the Heat stack that is created by OpenStack for the OpenShift Container Platform cluster is named openshift-cluster. If you want to use a different name, you must set the OPENSHIFT_CLUSTER environment variable before running the playbooks:

$ export OPENSHIFT_CLUSTER=openshift.example.com

If you use a non-default stack name and run the openshift-ansible playbooks to update your deployment, you must set OPENSHIFT_CLUSTER to your stack name to avoid errors.
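
For example, to update a deployment that was provisioned under a custom stack name, export the variable in the same shell before invoking the playbook:

$ export OPENSHIFT_CLUSTER=openshift.example.com
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml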

Registering with Subscription Manager the OpenShift Container Platform Instances

With the nodes successfully provisioned, the next step is to ensure all the nodes are registered via subscription-manager so that all the packages required for a successful OpenShift Container Platform installation can be installed. For simplicity, a repos.yml file has been created and provided.

$ cat ~/repos.yml
---
- name: Enable the proper repositories for OpenShift installation
  hosts: OSEv3
  become: yes
  tasks:
  - name: Register with activationkey and consume subscriptions matching Red Hat Cloud Suite or Red Hat OpenShift Container Platform
    redhat_subscription:
      state: present
      activationkey: <key-name>
      org_id: <org_id>
      pool: '^(Red Hat Cloud Suite|Red Hat OpenShift Container Platform)$'

  - name: Disable all current repositories
    rhsm_repository:
      name: '*'
      state: disabled

  - name: Enable Repositories
    rhsm_repository:
      name: "{{ item }}"
      state: enabled
    with_items:
      - rhel-7-server-rpms
      - rhel-7-server-extras-rpms
      - rhel-7-server-ansible-2.6-rpms
      - rhel-7-server-ose-3.11-rpms

Refer to Set Up Repositories to confirm the proper repositories and versions to enable. The above file is just a sample.

With repos.yml in place, run the ansible-playbook command:

$ ansible-playbook repos.yml

The above example uses Ansible's redhat_subscription and rhsm_repository modules for all registration, disabling, and enabling of repositories. This specific example takes advantage of a Red Hat activation key. If you do not have an activation key, see the Ansible redhat_subscription module documentation to use a username and password instead, as shown in the examples: https://docs.ansible.com/ansible/2.6/modules/redhat_subscription_module.html

At times, the redhat_subscription module may fail on certain nodes. If this issue occurs, manually register that OpenShift Container Platform instance using subscription-manager.

Installing OpenShift Container Platform by Using an Ansible Playbook

With the OpenStack instances provisioned, the focus shifts to the installation of OpenShift Container Platform. The installation and configuration is done via a series of Ansible playbooks and roles provided by the OpenShift RPM packages. Review the OSEv3.yml file that was previously configured to ensure all the options have been properly set.

Prior to running the installer playbook, ensure all the OpenShift Container Platform prerequisites are met via:

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

Run the installer playbook to install Red Hat OpenShift Container Platform:

$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/install.yml

OpenShift Container Platform version 3.11 is supported on RHOSP 14 and RHOSP 13. OpenShift Container Platform version 3.10 is supported on RHOSP 13.

Applying Configuration Changes to Existing OpenShift Container Platform Environment

Start or restart OpenShift Container Platform services on all master and node hosts to apply your configuration changes; see Restarting OpenShift Container Platform services:

# master-restart api
# master-restart controllers
# systemctl restart atomic-openshift-node

Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OpenShift Container Platform from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster that uses the cloud provider integration. Install the cluster as if it is a bare metal environment. It is not recommended to toggle cloud provider integration on or off in an installed cluster. However, if that scenario is unavoidable, then complete the following process.

Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue:

  1. Log in to the CLI as a cluster administrator.

  2. Check and back up existing node labels:

    $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
  3. Delete the nodes:

    $ oc delete node <node_name>
  4. On each node host, restart the OpenShift Container Platform service.

    # systemctl restart atomic-openshift-node
  5. Add back any labels on each node that you previously had.

Configuring OpenStack Variables on an existing OpenShift Environment

To set the required OpenStack variables, modify the /etc/origin/cloudprovider/openstack.conf file with the following contents on all of your OpenShift Container Platform hosts, both masters and nodes:

[Global]
auth-url = <OS_AUTH_URL>
username = <OS_USERNAME>
password = <password>
domain-id = <OS_USER_DOMAIN_ID>
tenant-id = <OS_TENANT_ID>
region = <OS_REGION_NAME>

[LoadBalancer]
subnet-id = <UUID of the load balancer subnet>

Consult your OpenStack administrators for values of the OS_ variables, which are commonly used in OpenStack configuration.
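
The subnet ID for the [LoadBalancer] section can be looked up with the OpenStack CLI; <subnet-name> here is the name of the subnet that your OpenShift nodes are attached to:

$ openstack subnet show <subnet-name> -f value -c id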

Configuring Zone Labels for Dynamically Created OpenStack PVs

Administrators can configure zone labels for dynamically created OpenStack PVs. This option is useful if the OpenStack Cinder zone name does not match the compute zone names, for example, if there is only one Cinder zone and many compute zones. Administrators can create Cinder volumes dynamically and then check the labels.

To view the zone labels for the PVs:

# oc get pv --show-labels
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                 STORAGECLASS   REASON    AGE       LABELS
pvc-1faa6f93-64ac-11e8-930c-fa163e3c373c   1Gi        RWO            Delete           Bound     openshift-node/pvc1   standard                 12s       failure-domain.beta.kubernetes.io/zone=nova

The default setting is enabled. Using the oc get pv --show-labels command returns the failure-domain.beta.kubernetes.io/zone=nova label.

To disable the zone label, update the openstack.conf file by adding:

[BlockStorage]
ignore-volume-az = yes

The PVs created after restarting the master services will not have the zone label.