When deployed on OpenStack, OKD can be configured to access the OpenStack infrastructure, including using OpenStack Cinder volumes as persistent storage for application data.
A successful deployment of OKD requires many prerequisites. These consist of a set of infrastructure and host configuration steps performed prior to the actual installation of OKD using Ansible. The following sections discuss the prerequisites and configuration changes required for an OKD on OpenStack environment in detail.
All the OpenStack CLI commands in this reference environment are executed
using the CLI openstack commands within the OpenStack director node. If using a
workstation or laptop to execute these commands instead of the OpenStack
director node, please ensure to install the following packages found
within the specified repositories.
Example: enable the rhel-7-server-openstack-13-rpms repository and the required OKD repositories from Set Up Repositories.
$ sudo subscription-manager repos \
    --enable rhel-7-server-openstack-13-rpms
$ sudo yum install -y python2-openstackclient python2-heatclient python2-octaviaclient ansible
Verify the packages are at least the following versions (use rpm -q <package_name>):
- python2-openstackclient 3.14.1-1
- python2-heatclient 1.14.0-1
- python2-octaviaclient 1.4.0-1
- ansible 2.4.3 or later
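A quick way to check all of the versions at once (a minimal example; adjust the package names if your environment differs):

$ rpm -q python2-openstackclient python2-heatclient python2-octaviaclient ansible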
Octavia is a supported load balancer solution that is recommended to be used in conjunction with OKD in order to load balance the external incoming traffic and provide a single view of the OKD master services for the applications.
In order to enable Octavia, the Octavia service must be included during the installation of the OpenStack overcloud, or upgraded if the overcloud already exists. The following steps provide basic, non-custom steps for enabling Octavia and apply to either a clean install of the overcloud or an overcloud update.
The following steps only capture the key pieces required during the deployment of OpenStack when dealing with Octavia. For more information, visit the documentation on Installation of OpenStack. It is also important to note that registry methods vary. For more information, visit the documentation on Registry Methods. This example uses the local registry method.
If using the local registry, create a template to upload the images to the registry. An example is shown below.
(undercloud) $ openstack overcloud container image prepare \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  --namespace=registry.access.redhat.com/rhosp13 \
  --push-destination=<local-ip-from-undercloud.conf>:8787 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/templates/overcloud_images.yaml \
  --output-images-file /home/stack/local_registry_images.yaml
Verify that the created local_registry_images.yaml contains the Octavia images.
...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
  push_destination: <local-ip-from-undercloud.conf>:8787
...
The versions of the Octavia containers will vary depending upon the specific Red Hat OpenStack Platform release installed.
The following step pulls the container images from registry.access.redhat.com to the undercloud node. This may take some time depending on the speed of the network and undercloud disk.
(undercloud) $ sudo openstack overcloud container image upload \
  --config-file  /home/stack/local_registry_images.yaml \
  --verbose
As an Octavia load balancer is used to access the OpenShift API, the default timeouts for its listeners' connections must be increased. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the overcloud deploy command:
(undercloud) $ cat octavia_timeouts.yaml
parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000
This is not needed from Red Hat OpenStack Platform 14 and onwards.
Install or update your overcloud environment with Octavia:
openstack overcloud deploy --templates \
  . . .
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  -e octavia_timeouts.yaml
  . . .
The command above only includes the files associated with Octavia. This command will vary based upon your specific installation of OpenStack. See the official OpenStack documentation for further information. For more information on customizing your Octavia installation, see Installation of Octavia using Director.
If Kuryr SDN is used, the overcloud installation requires the "trunk" extension to be enabled in Neutron. This is enabled by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. No modifications are needed if the backend is ML2/OVN.
Before installing OKD, the Red Hat OpenStack Platform (RHOSP) environment requires a project, often referred to as a tenant, that stores the OpenStack instances that are to host OKD. This project requires ownership by a user, and the role of that user must be set to _member_. The following steps show how to accomplish this.
As the OpenStack overcloud administrator, create a project (tenant) to store the RHOSP instances:
$ openstack project create <project>
Create a RHOSP user that has ownership of the previously created project:
$ openstack user create --password <password> <username>
Set the role of the user:
$ openstack role add --user <username> --project <project> _member_
The default quotas assigned to new Red Hat OpenStack Platform projects are not high enough for OKD installations. Increase the quotas to at least 30 security groups, 200 security group rules, and 200 ports.
$ openstack quota set --secgroups 30 --secgroup-rules 200 --ports 200 <project>
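To confirm the new limits took effect, the project quota can be displayed (a minimal check, assuming the same project name):

$ openstack quota show <project>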
Once the above is complete, an OpenStack administrator can create an RC file with all the required information for the user(s) implementing the OKD environment.
An example RC file:
$ cat path/to/examplerc
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="}  /^OS_/ {print $1}' ); do unset $key ; done
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=<project-name>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_AUTH_URL=http://<ip>:5000/v3
export OS_CLOUDNAME=<cloud-name>
export OS_IDENTITY_API_VERSION=3

# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
    export PS1=${PS1:-""}
    export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
    export CLOUDPROMPT_ENABLED=1
fi
Changing OS_PROJECT_DOMAIN_NAME and OS_USER_DOMAIN_NAME from the Default value is supported as long as both reference the same domain.
As the user(s) implementing the OKD environment, within the OpenStack director node or workstation, source the credentials as follows:
$ source path/to/examplerc
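A quick way to confirm the sourced credentials are valid is to request a token (a hedged check; any read-only command such as openstack project list works equally well):

$ openstack token issue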
Within OpenStack, flavors define the size of a virtual server by defining the compute, memory, and storage capacity of nova computing instances. Since the base image within this reference architecture is Red Hat Enterprise Linux 7.5, m1.node and m1.master sized flavors are created with the specifications shown in Minimum System Requirements for OpenShift.
Although the minimum system requirements are sufficient to run a cluster, to improve performance, it is recommended to increase vCPU on master nodes. Additionally, more memory is recommended if etcd is co-located on the master nodes.
Node Type | CPU | RAM | Root Disk | Flavor
---|---|---|---|---
Masters | 4 | 16 GB | 45 GB | m1.master
Nodes | 1 | 8 GB | 20 GB | m1.node
As an OpenStack administrator,
$ openstack flavor create <flavor_name> \
    --id auto \
    --ram <ram_in_MB> \
    --disk <disk_in_GB> \
    --vcpus <num_vcpus>
The example below shows the creation of the flavors within this reference environment.
$ openstack flavor create m1.master \
    --id auto \
    --ram 16384 \
    --disk 45 \
    --vcpus 4
$ openstack flavor create m1.node \
    --id auto \
    --ram 8192 \
    --disk 20 \
    --vcpus 1
If access to OpenStack administrator privileges to create new flavors is unavailable, use existing flavors within the OpenStack environment that meet the requirements in Minimum System Requirements for OpenShift.
Verify the OpenStack flavors via:
$ openstack flavor list
Red Hat OpenStack Platform uses cloud-init
to place an ssh
public key on each instance as it is
created to allow ssh
access to the instance. Red Hat OpenStack Platform expects the user to hold
the private key.
Losing the private key makes the instances inaccessible.
To generate a keypair, use the following command:
$ openstack keypair create <keypair-name> > /path/to/<keypair-name>.pem
Verification of the keypair creation can be done via:
$ openstack keypair list
Once the keypair is created, set the permissions to 600, allowing only the owner of the file to read and write to it.
$ chmod 600 /path/to/<keypair-name>.pem
DNS service is an important component in the OKD environment. Regardless of the provider of DNS, an organization is required to have certain records in place to serve the various OKD components.
Using the DNS key secret, you can provide the information to the OpenShift Ansible Installer, and it will automatically add A records for the target instances and the various OKD components. This setup process is described later when configuring the OpenShift Ansible Installer.
Access to a DNS server is expected. You can use Red Hat Labs DNS Helper for assistance with access.
Application DNS
Applications served by OpenShift are accessible by the router on ports 80/TCP and 443/TCP. The router uses a wildcard record to map all host names under a specific subdomain to the same IP address without requiring a separate record for each name.
This allows OKD to add applications with arbitrary names as long as they are under that subdomain.
For example, a wildcard record for *.apps.example.com
causes DNS name lookups
for tax.apps.example.com
and home-goods.apps.example.com
to both return the same IP address: 10.19.x.y
. All
traffic is forwarded to the OpenShift Routers. The Routers examine the HTTP
headers of the queries and forward them to the correct destination.
With a load balancer such as Octavia fronting the host address 10.19.x.y, the wildcard DNS record can be added as follows:
IP Address | Hostname | Purpose
---|---|---
10.19.x.y | *.apps.example.com | User access to application web services
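For illustration only, the corresponding record in a BIND zone file for example.com might look like the following (the IP address is a placeholder):

*.apps.example.com.    IN    A    10.19.x.y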
When deploying OKD on Red Hat OpenStack Platform as described in this segment, two networks are required: a public and an internal network.
Public Network
The public network provides external access and is reachable from the outside world. Only an OpenStack administrator can create the public network.
The following commands provide an example of creating an OpenStack provider network for public network access.
As an OpenStack administrator (overcloudrc access),
$ source /path/to/examplerc
$ openstack network create <public-net-name> \
    --external \
    --provider-network-type flat \
    --provider-physical-network datacentre
$ openstack subnet create <public-subnet-name> \
    --network <public-net-name> \
    --dhcp \
    --allocation-pool start=<float_start_ip>,end=<float_end_ip> \
    --gateway <ip> \
    --subnet-range <CIDR>
Once the network and subnet have been created, verify via:
$ openstack network list
$ openstack subnet list
Internal Network
The internal network is connected to the public network via a router during the network setup. This allows each Red Hat OpenStack Platform instance attached to the internal network to request a floating IP from the public network for public access. The internal network is created automatically by the OpenShift Ansible installer by setting the openshift_openstack_private_network_name parameter. More information regarding changes required for the OpenShift Ansible installer is described later.
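As a sketch, the corresponding entry in the all.yml inventory might look like the following; the network name shown here is a placeholder, and the variable is only needed if you want the installer to create the internal network with a specific name:

# all.yml (group_vars) - hypothetical value
openshift_openstack_private_network_name: openshift-internal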
OpenStack networking allows the user to define inbound and outbound traffic filters that can be applied to each instance on a network. This allows the user to limit network traffic to each instance based on the function of the instance services and not depend on host based filtering. The OpenShift Ansible installer handles the proper creation of all the ports and services required for each type of host that is part of the OKD cluster except for the deployment host.
The following command creates an empty security group with no rules set for the deployment host.
$ source path/to/examplerc
$ openstack security group create <deployment-sg-name>
Verify the creation of the security group:
$ openstack security group list
Deployment Host Security Group
The deployment instance only needs to allow inbound ssh
. This instance exists
to give operators a stable base to deploy, monitor and manage the OKD
environment.
Port/Protocol | Service | Remote source | Purpose
---|---|---|---
ICMP | ICMP | Any | Allow ping, traceroute, etc.
22/TCP | SSH | Any | Secure shell login
Creation of the above security group rules is as follows:
$ source /path/to/examplerc
$ openstack security group rule create \
    --ingress \
    --protocol icmp \
    <deployment-sg-name>
$ openstack security group rule create \
    --ingress \
    --protocol tcp \
    --dst-port 22 \
    <deployment-sg-name>
Verification of the security group rules is as follows:
$ openstack security group rule list <deployment-sg-name>
+--------------------------------------+-------------+-----------+------------+-----------------------+
| ID                                   | IP Protocol | IP Range  | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+------------+-----------------------+
| 7971fc03-4bfe-4153-8bde-5ae0f93e94a8 | icmp        | 0.0.0.0/0 |            | None                  |
| b8508884-e82b-4ee3-9f36-f57e1803e4a4 | None        | None      |            | None                  |
| cb914caf-3e84-48e2-8a01-c23e61855bf6 | tcp         | 0.0.0.0/0 | 22:22      | None                  |
| e8764c02-526e-453f-b978-c5ea757c3ac5 | None        | None      |            | None                  |
+--------------------------------------+-------------+-----------+------------+-----------------------+
OpenStack Block Storage provides persistent block storage management via the
cinder
service. Block storage enables the OpenStack user to create a volume
that may be attached to different OpenStack instances.
The master and node instances contain a volume to store docker
images.
The purpose of the volume is to ensure that a large image or container does not
compromise node performance or abilities of the existing node.
A docker volume of a minimum of 15 GB is required for running containers. This may need adjustment depending on the size and number of containers each node will run.
The docker volume is created by the OpenShift Ansible installer via the variable openshift_openstack_docker_volume_size. More information regarding changes required for the OpenShift Ansible installer is described later.
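For reference, a minimal all.yml entry matching the 15 GB minimum noted above might look like this (a sketch only; size the volume to your workload):

# all.yml (group_vars)
openshift_openstack_docker_volume_size: "15"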
The OpenShift image registry requires a cinder volume to ensure that images are saved in the event that the registry needs to migrate to another node. The following steps show how to create the image registry volume via OpenStack. Once the volume is created, its volume ID is included in the OpenShift Ansible Installer OSEv3.yml file via the parameter openshift_hosted_registry_storage_openstack_volumeID, as described later.
$ source /path/to/examplerc
$ openstack volume create --size <volume-size-in-GB> <registry-name>
The registry volume size should be at least 30 GB.
Verify the creation of the volume.
$ openstack volume list
+--------------------------------------+-----------------+-----------+------+-------------+
| ID                                   | Name            | Status    | Size | Attached to |
+--------------------------------------+-----------------+-----------+------+-------------+
| d65209f0-9061-4cd8-8827-ae6e2253a18d | <registry-name> | available |   30 |             |
+--------------------------------------+-----------------+-----------+------+-------------+
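The ID column above is the value needed later for openshift_hosted_registry_storage_openstack_volumeID in OSEv3.yml. A convenient way to capture just the ID (a hedged example using the standard output formatting options):

$ openstack volume show <registry-name> -f value -c id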
The role of the deployment instance is to serve as a utility host for the deployment and management of OKD.
Creating the Deployment Host Network and Router
Prior to instance creation, an internal network and router must be created for communication with the deployment host. The following commands create that network and router.
$ source path/to/examplerc
$ openstack network create <deployment-net-name>
$ openstack subnet create --network <deployment-net-name> \
    --subnet-range <subnet_range> \
    --dns-nameserver <dns-ip> \
    <deployment-subnet-name>
$ openstack router create <deployment-router-name>
$ openstack router set --external-gateway <public-net-name> <deployment-router-name>
$ openstack router add subnet <deployment-router-name> <deployment-subnet-name>
Deploying the Deployment Instance
With the network and security group created, deploy the instance.
$ domain=<domain>
$ netid1=$(openstack network show <deployment-net-name> -f value -c id)
$ openstack server create \
    --nic net-id=$netid1 \
    --flavor <flavor> \
    --image <image> \
    --key-name <keypair> \
    --security-group <deployment-sg-name> \
    deployment.$domain
If the m1.small flavor does not exist by default, use an existing flavor that meets the requirements of 1 vCPU and 2 GB of RAM.
Creating and Adding Floating IP to the Deployment Instance
Once the deployment instance is created, a floating IP must be created and then allocated to the instance. The following shows an example.
$ source /path/to/examplerc
$ openstack floating ip create <public-network-name>
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2017-08-24T22:44:03Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 10.20.120.150                        |
| floating_network_id | 084884f9-d9d2-477a-bae7-26dbb4ff1873 |
| headers             |                                      |
| id                  | 2bc06e39-1efb-453e-8642-39f910ac8fd1 |
| port_id             | None                                 |
| project_id          | ca304dfee9a04597b16d253efd0e2332     |
| project_id          | ca304dfee9a04597b16d253efd0e2332     |
| revision_number     | 1                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| updated_at          | 2017-08-24T22:44:03Z                 |
+---------------------+--------------------------------------+
Within the above output, the floating_ip_address
field shows that the floating
IP 10.20.120.150
is created. In order to assign this IP to the deployment instance,
run the following command:
$ source /path/to/examplerc
$ openstack server add floating ip <deployment-instance-name> <ip>
For example, if instance deployment.example.com
is to be assigned IP
10.20.120.150
the command would be:
$ source /path/to/examplerc
$ openstack server add floating ip deployment.example.com 10.20.120.150
Adding the RC File to the Deployment Host
Once the deployment host exists, copy the RC file created earlier to the deployment host via scp as follows:
scp <rc-file-deployment-host> cloud-user@<ip>:/home/cloud-user/
The following subsections describe all the steps needed to properly configure the deployment instance.
Configure ~/.ssh/config to use Deployment Host as a Jumphost
To easily connect to the OKD environment, follow the steps below.
On the OpenStack director node or local workstation with the private key, <keypair-name>.pem:
$ exec ssh-agent bash

$ ssh-add /path/to/<keypair-name>.pem
Identity added: /path/to/<keypair-name>.pem (/path/to/<keypair-name>.pem)
Add to the ~/.ssh/config
file:
Host deployment
    HostName <deployment_fqdn_hostname OR IP address>
    User cloud-user
    IdentityFile /path/to/<keypair-name>.pem
    ForwardAgent yes
ssh
into the deployment host with the -A
option that enables forwarding of
the authentication agent connection.
Ensure the permissions are read and write only for the owner of the ~/.ssh/config file:
$ chmod 600 ~/.ssh/config
$ ssh -A cloud-user@deployment
Once logged into the deployment host, verify the ssh agent forwarding is working by checking for the SSH_AUTH_SOCK environment variable:
$ echo "$SSH_AUTH_SOCK"
/tmp/ssh-NDFDQD02qB/agent.1387
Subscription Manager and enabling OKD Repositories
Within the deployment instance, register it with the Red Hat Subscription Manager. This can be accomplished by using credentials:
$ sudo subscription-manager register --username <user> --password '<password>'
Alternatively, you can use an activation key:
$ sudo subscription-manager register --org="<org_id>" --activationkey=<keyname>
Once registered, enable the following repositories as follows.
$ sudo subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.10-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms" \
    --enable="rhel-7-server-openstack-13-rpms" \
    --enable="rhel-7-server-openstack-13-tools-rpms"
Refer to Set Up Repositories to confirm the proper OKD repositories and Ansible versions to enable. The above is just a sample.
Required Packages on the Deployment Host
The following packages are required to be installed on the deployment host.
Install the following packages:
openshift-ansible
python-openstackclient
python2-heatclient
python2-octaviaclient
python2-shade
python-dns
git
ansible
$ sudo yum -y install openshift-ansible python-openstackclient python2-heatclient python2-octaviaclient python2-shade python-dns git ansible
Configure Ansible
ansible
is installed on the deployment instance to perform the registration,
installation of packages, and the deployment of the OKD environment on the
master and node instances.
Before running playbooks, it is important to create an ansible.cfg file to reflect the environment you wish to deploy:
$ cat ~/ansible.cfg
[defaults]
forks = 20
host_key_checking = False
remote_user = openshift
gathering = smart
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 600
log_path = $HOME/ansible.log
nocows = 1
callback_whitelist = profile_tasks
inventory = /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py,/home/cloud-user/inventory

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=false
control_path = %(directory)s/%%h-%%r
pipelining = True
timeout = 10

[persistent_connection]
connect_timeout = 30
connect_retries = 30
connect_interval = 1
The following parameter values are important in the ansible.cfg file.
Example: inventory = path/to/inventory1,path/to/inventory2
The code block above can overwrite the default values in the file. Be sure to populate <keypair-name> with the keypair that was copied to the deployment instance.
The inventory folder is created in Preparing the Inventory for Provisioning.
OpenShift Authentication
OKD provides the ability to use many different authentication platforms. A listing of authentication options is available at Configuring Authentication and User Agent.
Configuring the default identity provider is important as the default configuration is to Deny All.
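As a sketch, an htpasswd-based provider can be configured in OSEv3.yml instead of the default Deny All; the user name and hash below are placeholders, so verify the variables against the identity provider documentation linked above before use:

# Hedged example: htpasswd authentication via Ansible variables
openshift_master_identity_providers:
  - name: htpasswd_auth
    login: 'true'
    challenge: 'true'
    kind: HTPasswdPasswordIdentityProvider
# Optional pre-created users; the value below is a placeholder hash
openshift_master_htpasswd_users:
  user1: '<htpasswd-hash>'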
Once the creation and configuration of the deployment host is complete, we turn to preparing the environment for the deployment of OKD using Ansible. In the following subsections, Ansible is configured and certain YAML files are modified to achieve a successful OKD on OpenStack deployment.
With the installation of the openshift-ansible
package complete via our
previous steps, there resides a
sample-inventory
directory that we will copy to our cloud-user
home directory
of the deployment host.
On the deployment host,
$ cp -r /usr/share/ansible/openshift-ansible/playbooks/openstack/sample-inventory/ ~/inventory
Within this inventory directory, the all.yml file contains all the different parameters that must be set in order to achieve successful provisioning of the RHOCP instances. The OSEv3.yml file contains some references required by the all.yml file and all the available OKD cluster parameters that you can customize.
The all.yml file has many options that can be modified to meet your specific needs. The information gathered in this file is for the provisioning portion of the instances required for a successful deployment of OKD. It is important to review these carefully. This document provides a condensed version of the all.yml file and focuses on the most critical parameters that need to be set for a successful deployment.
$ cat ~/inventory/group_vars/all.yml
---
openshift_openstack_clusterid: "openshift"
openshift_openstack_public_dns_domain: "example.com"
openshift_openstack_dns_nameservers: ["10.19.115.228"]
openshift_openstack_public_hostname_suffix: "-public"
openshift_openstack_nsupdate_zone: "{{ openshift_openstack_public_dns_domain }}"

openshift_openstack_keypair_name: "openshift"
openshift_openstack_external_network_name: "public"

openshift_openstack_default_image_name: "rhel75"

## Optional (Recommended) - This removes the need for floating IPs
## on the OpenShift Cluster nodes
openshift_openstack_node_network_name: <deployment-net-name>
openshift_openstack_node_subnet_name: <deployment-subnet-name>
openshift_openstack_router_name: <deployment-router-name>
openshift_openstack_master_floating_ip: false
openshift_openstack_infra_floating_ip: false
openshift_openstack_compute_floating_ip: false
## end of Optional Floating IP section

openshift_openstack_num_masters: 3
openshift_openstack_num_infra: 3
openshift_openstack_num_cns: 0
openshift_openstack_num_nodes: 2

openshift_openstack_master_flavor: "m1.master"
openshift_openstack_default_flavor: "m1.node"

openshift_openstack_use_lbaas_load_balancer: true

openshift_openstack_docker_volume_size: "15"

## Roll-your-own DNS
openshift_openstack_external_nsupdate_keys:
  public:
    key_secret: '/alb8h0eAFWvb4i+CMA12w=='
    key_name: "update-key"
    key_algorithm: 'hmac-md5'
    server: '<ip-of-DNS>'
  private:
    key_secret: '/alb8h0eAFWvb4i+CMA12w=='
    key_name: "update-key"
    key_algorithm: 'hmac-md5'
    server: '<ip-of-DNS>'

ansible_user: openshift

## cloud config
openshift_openstack_disable_root: true
openshift_openstack_user: openshift
Because an external DNS server is used, the private and public sections use the public IP address of the DNS server, as the DNS server does not reside in the OpenStack environment.
The environment-specific values above, such as the DNS domain, nameservers, keypair name, external network, image name, and nsupdate key details, require modification based upon your OpenStack environment and DNS server.
In order to properly modify the DNS portion of the all.yml, log in to the DNS server and perform the following commands to capture the key name, key algorithm, and key secret:
$ ssh <ip-of-DNS>
$ sudo -i
# cat /etc/named/<key-name.key>
key "update-key" {
        algorithm hmac-md5;
        secret "/alb8h0eAFWvb4i+CMA02w==";
};
The key name may vary and the above is only an example.
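Optionally, the key can be tested against the DNS server before running the installer with a manual nsupdate session (a hedged example; the record name and IP are placeholders):

$ nsupdate -k /etc/named/<key-name.key>
> server <ip-of-DNS>
> update add test.apps.example.com 300 A 10.19.x.y
> send
> quit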
The following all.yml file enables Kuryr SDN instead of the default openshift-sdn. Note that the example below is a condensed version, and it is important to review the default template carefully.
$ cat ~/inventory/group_vars/all.yml
---
openshift_openstack_clusterid: "openshift"
openshift_openstack_public_dns_domain: "example.com"
openshift_openstack_dns_nameservers: ["10.19.115.228"]
openshift_openstack_public_hostname_suffix: "-public"
openshift_openstack_nsupdate_zone: "{{ openshift_openstack_public_dns_domain }}"

openshift_openstack_keypair_name: "openshift"
openshift_openstack_external_network_name: "public"

openshift_openstack_default_image_name: "rhel75"

## Optional (Recommended) - This removes the need for floating IPs
## on the OpenShift Cluster nodes
openshift_openstack_node_network_name: <deployment-net-name>
openshift_openstack_node_subnet_name: <deployment-subnet-name>
openshift_openstack_router_name: <deployment-router-name>
openshift_openstack_master_floating_ip: false
openshift_openstack_infra_floating_ip: false
openshift_openstack_compute_floating_ip: false
## end of Optional Floating IP section

openshift_openstack_num_masters: 3
openshift_openstack_num_infra: 3
openshift_openstack_num_cns: 0
openshift_openstack_num_nodes: 2

openshift_openstack_master_flavor: "m1.master"
openshift_openstack_default_flavor: "m1.node"

## Kuryr configuration
openshift_use_kuryr: True
openshift_use_openshift_sdn: False
use_trunk_ports: True
os_sdn_network_plugin_name: cni
openshift_node_proxy_mode: userspace
kuryr_openstack_pool_driver: nested
openshift_kuryr_precreate_subports: 5

kuryr_openstack_public_net_id: <public_ID>

# Select kuryr image (always latest available)
openshift_openstack_kuryr_controller_image: registry.access.redhat.com/rhosp14/openstack-kuryr-controller:latest
openshift_openstack_kuryr_cni_image: registry.access.redhat.com/rhosp14/openstack-kuryr-cni:latest

openshift_master_open_ports:
- service: dns tcp
  port: 53/tcp
- service: dns udp
  port: 53/udp
openshift_node_open_ports:
- service: dns tcp
  port: 53/tcp
- service: dns udp
  port: 53/udp
## end of Kuryr configuration

openshift_openstack_use_lbaas_load_balancer: true

openshift_openstack_docker_volume_size: "15"

## Roll-your-own DNS
openshift_openstack_external_nsupdate_keys:
  public:
    key_secret: '/alb8h0eAFWvb4i+CMA12w=='
    key_name: "update-key"
    key_algorithm: 'hmac-md5'
    server: '<ip-of-DNS>'
  private:
    key_secret: '/alb8h0eAFWvb4i+CMA12w=='
    key_name: "update-key"
    key_algorithm: 'hmac-md5'
    server: '<ip-of-DNS>'

ansible_user: openshift

## cloud config
openshift_openstack_disable_root: true
openshift_openstack_user: openshift
Use the latest supported Kuryr images, regardless of the overcloud Red Hat OpenStack Platform version. For instance, use Kuryr images from OSP 14 whether the overcloud is OSP 14 or OSP 13. Kuryr is just another workload on top of the overcloud, and it aligns better with new OpenShift features if you use the latest images.
Network policies, namespace isolation, and NodePort services are not supported when Kuryr SDN is enabled.
Brief description of each variable in the table below:
Variable | Description
---|---
openshift_openstack_clusterid | Cluster identification name
openshift_openstack_public_dns_domain | Public DNS domain name
openshift_openstack_dns_nameservers | IP of DNS nameservers
openshift_openstack_public_hostname_suffix | Adds a suffix to the node hostname in the DNS record for both public and private
openshift_openstack_nsupdate_zone | Zone to be updated with OCP instance IPs
openshift_openstack_keypair_name | Keypair name used to log into OCP instances
openshift_openstack_external_network_name | OpenStack public network name
openshift_openstack_default_image_name | OpenStack image used for OCP instances
openshift_openstack_num_masters | Number of master nodes to deploy
openshift_openstack_num_infra | Number of infrastructure nodes to deploy
openshift_openstack_num_cns | Number of container native storage nodes to deploy
openshift_openstack_num_nodes | Number of application nodes to deploy
openshift_openstack_master_flavor | Name of the OpenStack flavor used for master instances
openshift_openstack_default_flavor | Name of the OpenStack flavor used for all instances, if a specific flavor is not specified
openshift_openstack_use_lbaas_load_balancer | Boolean value enabling the Octavia load balancer (Octavia must be installed)
openshift_openstack_docker_volume_size | Minimum size of the Docker volume (required variable)
openshift_openstack_external_nsupdate_keys | Updating the DNS with the instance IP addresses
ansible_user | Ansible user used to deploy OKD. "openshift" is the required name and must not be changed.
openshift_openstack_disable_root | Boolean value that disables root access
openshift_openstack_user | OCP instances created with this user
openshift_openstack_node_network_name | Name of existing OpenShift network to use for deployment. This should be the same network name used for your deployment host.
openshift_openstack_node_subnet_name | Name of existing OpenShift subnet to use for deployment. This should be the same subnet name used for your deployment host.
openshift_openstack_router_name | Name of existing OpenShift router to use for deployment. This should be the same router name used for your deployment host.
openshift_openstack_master_floating_ip | Default is true. Set to false if you do not want floating IPs assigned to master nodes.
openshift_openstack_infra_floating_ip | Default is true. Set to false if you do not want floating IPs assigned to infrastructure nodes.
openshift_openstack_compute_floating_ip | Default is true. Set to false if you do not want floating IPs assigned to compute nodes.
openshift_use_openshift_sdn | Must be set to False to disable openshift-sdn
openshift_use_kuryr | Must be set to True to enable the Kuryr SDN
use_trunk_ports | Must be set to True to create the OpenStack VMs with trunk ports (required by Kuryr)
os_sdn_network_plugin_name | Selection of the SDN behavior. Must be set to cni to use the Kuryr SDN
openshift_node_proxy_mode | Must be set to userspace for Kuryr
openshift_master_open_ports | Ports to be opened on the VMs when using Kuryr
kuryr_openstack_public_net_id | Needed by Kuryr. ID of the public OpenStack network from where FIPs are obtained
The OSEv3.yml file specifies all the different parameters and customizations relating to the installation of OpenShift.
Below is a condensed version of the file with all required variables for a successful deployment. Additional variables may be required depending on what customization is required for your specific OKD deployment.
$ cat ~/inventory/group_vars/OSEv3.yml
---

openshift_deployment_type: openshift-enterprise
openshift_release: v3.10
openshift_master_default_subdomain: "apps.{{ (openshift_openstack_clusterid|trim == '') | ternary(openshift_openstack_public_dns_domain, openshift_openstack_clusterid + '.' + openshift_openstack_public_dns_domain) }}"
openshift_master_cluster_public_hostname: "console.{{ (openshift_openstack_clusterid|trim == '') | ternary(openshift_openstack_public_dns_domain, openshift_openstack_clusterid + '.' + openshift_openstack_public_dns_domain) }}"

# OpenStack Credentials:
openshift_cloudprovider_kind: openstack
openshift_cloudprovider_openstack_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
openshift_cloudprovider_openstack_username: "{{ lookup('env','OS_USERNAME') }}"
openshift_cloudprovider_openstack_password: "{{ lookup('env','OS_PASSWORD') }}"
openshift_cloudprovider_openstack_tenant_name: "{{ lookup('env','OS_PROJECT_NAME') }}"
openshift_cloudprovider_openstack_blockstorage_version: v2
openshift_cloudprovider_openstack_domain_name: "{{ lookup('env','OS_USER_DOMAIN_NAME') }}"

# Use Cinder volume for Openshift registry:
openshift_hosted_registry_storage_kind: openstack
openshift_hosted_registry_storage_access_modes: ['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem: xfs
openshift_hosted_registry_storage_volume_size: 30Gi
openshift_hosted_registry_storage_openstack_volumeID: d65209f0-9061-4cd8-8827-ae6e2253a18d

openshift_hostname_check: false
ansible_become: true

#Setting SDN (defaults to ovs-networkpolicy) not part of OSEv3.yml
#For more info, on which to choose, visit:
#https://docs.openshift.com/container-platform/3.10/architecture/networking/sdn.html#overview
networkPluginName: redhat/ovs-networkpolicy
#networkPluginName: redhat/ovs-multitenant

#Configuring identity providers with Ansible
#For initial cluster installations, the Deny All identity provider is configured
#by default. It is recommended to be configured with either htpasswd
#authentication, LDAP authentication, or Allowing all authentication (not recommended)
#For more info, visit:
#https://docs.openshift.com/container-platform/3.10/install_config/configuring_authentication.html#identity-providers-ansible

#Example of Allowing All
#openshift_master_identity_providers: [{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]

#Optional Metrics (uncomment below lines for installation)

#openshift_metrics_install_metrics: true
#openshift_metrics_cassandra_storage_type: dynamic
#openshift_metrics_storage_volume_size: 25Gi
#openshift_metrics_cassandra_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_metrics_hawkular_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_metrics_heapster_nodeselector: {"node-role.kubernetes.io/infra":"true"}

#Optional Aggregated Logging (uncomment below lines for installation)

#openshift_logging_install_logging: true
#openshift_logging_es_pvc_dynamic: true
#openshift_logging_es_pvc_size: 30Gi
#openshift_logging_es_cluster_size: 3
#openshift_logging_es_number_of_replicas: 1
#openshift_logging_es_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_logging_kibana_nodeselector: {"node-role.kubernetes.io/infra":"true"}
#openshift_logging_curator_nodeselector: {"node-role.kubernetes.io/infra":"true"}
For further details on any of the variables listed, see an example OpenShift-Ansible host inventory.
The OKD Ansible Installer provides a playbook to ensure all the provisioning steps of the OpenStack instances have been met.
Prior to running the playbook, source the RC file:
$ source path/to/examplerc
Via the ansible-playbook
command on the deployment host, ensure all the
prerequisites are met using prerequisites.yml
playbook:
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/prerequisites.yml
Once the prerequisite playbook completes successfully, run the provision playbook as follows:
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml
If provision.yml prematurely errors, check the status of the OpenStack stack and wait for it to finish:

$ watch openstack stack list
+--------------------------------------+-------------------+--------------------+----------------------+--------------+
| ID                                   | Stack Name        | Stack Status       | Creation Time        | Updated Time |
+--------------------------------------+-------------------+--------------------+----------------------+--------------+
| 87cb6d1c-8516-40fc-892b-49ad5cb87fac | openshift-cluster | CREATE_IN_PROGRESS | 2018-08-20T23:44:46Z | None         |
+--------------------------------------+-------------------+--------------------+----------------------+--------------+

If the stack shows a CREATE_IN_PROGRESS status, wait for the stack to finish and then re-run the provision.yml playbook. If the stack shows a CREATE_FAILED status, run the following command to see the cause of the errors:

$ openstack stack failures list openshift-cluster
With the nodes successfully provisioned, the next step is to ensure all the
nodes are successfully registered via subscription-manager
to install all the
required packages for a successful OKD installation. For simplicity,
a repos.yml file has been created and provided.
$ cat ~/repos.yml
---
- name: Enable the proper repositories for OpenShift installation
  hosts: OSEv3
  become: yes
  tasks:
  - name: Register with activationkey and consume subscriptions matching Red Hat Cloud Suite or Red Hat OpenShift Container Platform
    redhat_subscription:
      state: present
      activationkey: <key-name>
      org_id: <org_id>
      pool: '^(Red Hat Cloud Suite|Red Hat OpenShift Container Platform)$'

  - name: Disable all current repositories
    rhsm_repository:
      name: '*'
      state: disabled

  - name: Enable Repositories
    rhsm_repository:
      name: "{{ item }}"
      state: enabled
    with_items:
      - rhel-7-server-rpms
      - rhel-7-server-extras-rpms
      - rhel-7-server-ansible-2.4-rpms
      - rhel-7-server-ose-3.10-rpms
Refer to Set Up Repositories to confirm the proper repositories and versions to enable. The above file is just a sample.
With the repos.yml, run the ansible-playbook
command:
$ ansible-playbook repos.yml
The above example uses Ansible's redhat_subscription and rhsm_repository modules for all registration, disabling, and enabling of repositories. This specific example takes advantage of a Red Hat activation key. If you do not have an activation key, refer to the Ansible redhat_subscription module documentation to use a username and password instead, as shown in the examples: https://docs.ansible.com/ansible/2.6/modules/redhat_subscription_module.html
With the OpenStack instances provisioned, the focus shifts to the installation of OKD. The installation and configuration is done via a series of Ansible playbooks and roles provided by the OpenShift RPM packages. Review the OSEv3.yml file that was previously configured to ensure all the options have been properly set.
Prior to running the installer playbook, ensure all the OKD prerequisites are met via:
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
Run the installer playbook to install Red Hat OpenShift Container Platform:
$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/install.yml
OKD version 3.11 is supported on Red Hat OpenStack Platform 14 and 13. OKD version 3.10 is supported on Red Hat OpenStack Platform 13.
Start or restart OKD services on all master and node hosts to apply your configuration changes; see Restarting OKD services:
# master-restart api
# master-restart controllers
# systemctl restart atomic-openshift-node
Switching from not using a cloud provider to using a cloud provider produces an
error message. Adding the cloud provider tries to delete the node because the
node switches from using the hostname as the externalID
(which would have
been the case when no cloud provider was being used) to using the cloud
provider’s instance-id
(which is what the cloud provider specifies). To
resolve this issue:
Log in to the CLI as a cluster administrator.
Check and back up existing node labels:
$ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
Delete the nodes:
$ oc delete node <node_name>
On each node host, restart the OKD service.
# systemctl restart origin-node
Add back any labels on each node that you previously had.
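For example, a previously captured label can be reapplied with oc label (the key and value below are placeholders):

$ oc label node <node_name> <label_key>=<label_value>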
To set the required OpenStack variables, modify the /etc/origin/cloudprovider/openstack.conf file with the following contents on all of your OKD hosts, both masters and nodes:
[Global]
auth-url = <OS_AUTH_URL>
username = <OS_USERNAME>
password = <password>
domain-id = <OS_USER_DOMAIN_ID>
tenant-id = <OS_TENANT_ID>
region = <OS_REGION_NAME>

[LoadBalancer]
subnet-id = <UUID of the load balancer subnet>
Consult your OpenStack administrators for values of the OS_
variables, which
are commonly used in OpenStack configuration.
Administrators can configure zone labels for dynamically created OpenStack PVs. This option is useful if the OpenStack Cinder zone name does not match the compute zone names, for example, if there is only one Cinder zone and many compute zones. Administrators can create Cinder volumes dynamically and then check the labels.
To view the zone labels for the PVs:
# oc get pv --show-labels
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                 STORAGECLASS   REASON    AGE       LABELS
pvc-1faa6f93-64ac-11e8-930c-fa163e3c373c   1Gi        RWO            Delete           Bound     openshift-node/pvc1   standard                 12s       failure-domain.beta.kubernetes.io/zone=nova
The default setting is enabled. Using the oc get pv --show-labels
command returns the failure-domain.beta.kubernetes.io/zone=nova
label.
To disable the zone label, update the openstack.conf file by adding:
[BlockStorage]
ignore-volume-az = yes
The PVs created after restarting the master services will not have the zone label.