Deploying hosted control planes clusters on OpenStack is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can deploy hosted control planes with hosted clusters that run on OpenStack 17.1.
A hosted cluster is an OKD cluster with its API endpoint and control plane that are hosted on a management cluster. With hosted control planes, control planes exist as pods on a management cluster without the need for dedicated virtual or physical machines for each control plane.
Before you create a hosted cluster on OpenStack, ensure that you meet the following requirements:
You have administrative access to a management OKD cluster version 4.17 or greater. This cluster can run on bare metal, OpenStack, or a supported public cloud.
The HyperShift Operator is installed on the management cluster as specified in "Preparing to deploy hosted control planes".
The management cluster is configured with OVN-Kubernetes as the default pod network CNI.
The OpenShift CLI (oc) and the hosted control planes CLI (hcp) are installed.
A load-balancer backend, for example, Octavia, is installed on the management OKD cluster. The load balancer is required for the kube-api service to be created for each hosted cluster.
When ingress is configured with an Octavia load balancer, the OpenStack Octavia service is running in the cloud that hosts the guest cluster.
A valid pull secret file is present for the quay.io/openshift-release-dev repository.
The default external network for the management cluster is reachable from the guest cluster. The kube-apiserver load-balancer type service is created on this network.
If you use a pre-defined floating IP address for ingress, you created a DNS record that points to it for the following wildcard domain: *.apps.<cluster_name>.<base_domain>, where:
<cluster_name> is the name of the management cluster.
<base_domain> is the parent DNS domain under which your cluster’s applications live.
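For example, assuming a hypothetical hosted cluster named my-hcp-cluster under the example.com domain, you can confirm that the wildcard record resolves before you deploy:
$ dig +short test.apps.my-hcp-cluster.example.com
Any hostname under the wildcard domain should return the floating IP address that you reserved for ingress.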
In a Hosted Control Plane (HCP) deployment on OpenStack, you can improve etcd performance by using local ephemeral storage that is provisioned with the TopoLVM CSI driver instead of relying on the default Cinder-based Persistent Volume Claims (PVCs).
You have access to a management cluster with HyperShift installed.
You can create and manage OpenStack flavors and machine sets.
You have the oc and openstack CLI tools installed and configured.
You are familiar with TopoLVM and Logical Volume Manager (LVM) storage concepts.
You installed the LVM Storage Operator on the management cluster. For more information, see "Installing LVM Storage by using the CLI" in the Storage section of the OKD documentation.
Create a Nova flavor with an additional ephemeral disk by using the openstack CLI. For example:
$ openstack flavor create \
--id auto \
--ram 8192 \
--disk 0 \
--ephemeral 100 \
--vcpus 4 \
--public \
hcp-etcd-ephemeral
Nova automatically attaches the ephemeral disk to the instance and formats it as VFAT.
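Optionally, confirm the flavor properties, including the ephemeral disk size, before you create the machine set:
$ openstack flavor show hcp-etcd-ephemeral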
Create a compute machine set that uses the new flavor. For more information, see "Creating a compute machine set on OpenStack" in the OKD documentation.
Scale the machine set to meet your requirements. If clusters are deployed for high availability, deploy a minimum of three workers so that the etcd pods can be distributed across them.
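For example, a minimal sketch of scaling an existing machine set, where the machine set name is a placeholder:
$ oc scale machineset <machineset_name> --replicas=3 -n openshift-machine-api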
Label the new worker nodes to identify them for etcd use. For example:
$ oc label node <node_name> hypershift-capable=true
This label is arbitrary; you can update it later.
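For example, you can confirm which nodes carry the label by running:
$ oc get nodes -l hypershift-capable=true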
In a file called lvmcluster.yaml, create the following LVMCluster custom resource to define the local storage configuration for etcd:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: etcd-hcp
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
    - name: etcd-class
      default: true
      nodeSelector:
        nodeSelectorTerms:
        - matchExpressions:
          - key: hypershift-capable
            operator: In
            values:
            - "true"
      deviceSelector:
        forceWipeDevicesAndDestroyAllData: true
        paths:
        - /dev/vdb
In this example resource:
The ephemeral disk location is /dev/vdb, which is the case in most situations. Verify that this location is correct in your environment, and note that symlinks are not supported.
The parameter forceWipeDevicesAndDestroyAllData is set to true because the default Nova ephemeral disk comes formatted as VFAT.
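If you are not sure of the device name, you can check the block devices on a labeled worker before applying the resource; for example, with a placeholder node name:
$ oc debug node/<node_name> -- chroot /host lsblk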
Apply the LVMCluster resource by running the following command:
$ oc apply -f lvmcluster.yaml
Verify the LVMCluster resource by running the following command:
$ oc get lvmcluster -A
NAMESPACE NAME STATUS
openshift-storage etcd-hcp Ready
Verify the StorageClass resource by running the following command:
$ oc get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
lvms-etcd-class topolvm.io Delete WaitForFirstConsumer true 23m
standard-csi (default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 56m
You can now deploy a hosted cluster with a performant etcd configuration. The deployment process is described in "Creating a hosted cluster on OpenStack".
If you want to make ingress available in a hosted cluster without manual intervention, you can create a floating IP address for it in advance.
You have access to the OpenStack cloud.
If you use a pre-defined floating IP address for ingress, you created a DNS record that points to it for the following wildcard domain: *.apps.<cluster_name>.<base_domain>, where:
<cluster_name> is the name of the management cluster.
<base_domain> is the parent DNS domain under which your cluster’s applications live.
Create a floating IP address by running the following command:
$ openstack floating ip create <external_network_id>
where:
<external_network_id>
Specifies the ID of the external network.
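If the floating IP address must match an existing DNS record, you can request a specific address. For example, with a hypothetical address:
$ openstack floating ip create --floating-ip-address 192.0.2.10 <external_network_id>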
If you specify a floating IP address by using the |
If you want to specify which FCOS image to use when deploying node pools in a hosted control planes deployment on OpenStack, upload the image to the OpenStack cloud. If you do not upload the image, the OpenStack Resource Controller (ORC) downloads an image from the OKD mirror and deletes it when the hosted cluster is deleted.
You downloaded the FCOS image from the OKD mirror.
You have access to your OpenStack cloud.
Upload an FCOS image to OpenStack by running the following command:
$ openstack image create --disk-format qcow2 --file <image_file_name> rhcos
where:
<image_file_name>
Specifies the file name of the FCOS image.
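Optionally, verify that the image was uploaded and is active:
$ openstack image show rhcos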
You can create a hosted cluster on OpenStack by using the hcp CLI.
You completed all prerequisite steps in "Preparing to deploy hosted control planes".
You reviewed "Prerequisites for OpenStack".
You completed all steps in "Preparing the management cluster for etcd local storage".
You have access to the management cluster.
You have access to the OpenStack cloud.
Create a hosted cluster by running the hcp create command. For example, for a cluster that takes advantage of the performant etcd configuration detailed in "Preparing the management cluster for etcd local storage", enter:
$ hcp create cluster openstack \
--name my-hcp-cluster \
--openstack-node-flavor m1.xlarge \
--base-domain example.com \
--pull-secret /path/to/pull-secret.json \
--release-image quay.io/openshift-release-dev/ocp-release:4.19.0-x86_64 \
--node-pool-replicas 3 \
--etcd-storage-class lvms-etcd-class
Many options are available at cluster creation. For OpenStack-specific options, see "Options for creating a Hosted Control Planes cluster on OpenStack". For general options, see the hcp documentation.
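Optionally, generate a kubeconfig for the new hosted cluster so that you can run commands against it later. A minimal sketch, assuming your version of the hcp CLI provides the create kubeconfig subcommand:
$ hcp create kubeconfig --name my-hcp-cluster > my-hcp-cluster.kubeconfig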
Verify that the hosted cluster is ready by running the following command on the management cluster:
$ oc -n clusters-<cluster_name> get pods
where:
<cluster_name>
Specifies the name of the cluster.
After several minutes, the output should show that the hosted control plane pods are running.
NAME READY STATUS RESTARTS AGE
capi-provider-5cc7b74f47-n5gkr 1/1 Running 0 3m
catalog-operator-5f799567b7-fd6jw 2/2 Running 0 69s
certified-operators-catalog-784b9899f9-mrp6p 1/1 Running 0 66s
cluster-api-6bbc867966-l4dwl 1/1 Running 0 66s
...
...
...
redhat-operators-catalog-9d5fd4d44-z8qqk 1/1 Running 0
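You can also check the overall status of the hosted cluster resource on the management cluster. This example assumes the default clusters namespace:
$ oc get hostedcluster -n clusters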
To validate the etcd configuration of the cluster:
Validate the etcd persistent volume claim (PVC) by running the following command:
$ oc get pvc -A
Inside the etcd pod of the hosted control plane, confirm the mount path and device by running the following command:
$ df -h /var/lib
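For example, a sketch that runs the check from the management cluster, assuming the etcd pod is named etcd-0 and its main container is named etcd:
$ oc -n clusters-<cluster_name> exec etcd-0 -c etcd -- df -h /var/lib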
The OpenStack resources that the cluster API (CAPI) provider creates are tagged with a label that identifies the hosted cluster. You can define additional tags for the resources.
You can supply several options to the hcp CLI while deploying a hosted control planes cluster on OpenStack.
| Option | Description | Required |
|---|---|---|
|  | Path to the OpenStack CA certificate file. If not provided, this will be automatically extracted from the cloud entry. | No |
|  | Name of the cloud entry. | No |
|  | Path to the OpenStack credentials file. | No |
|  | List of DNS server addresses that are provided when creating the subnet. | No |
|  | ID of the OpenStack external network. | No |
|  | A floating IP for OpenShift ingress. | No |
|  | Additional ports to attach to nodes. | No |
|  | Availability zone for the node pool. | No |
| --openstack-node-flavor | Flavor for the node pool. | Yes |
|  | Image name for the node pool. | No |