Managing hosted control planes on OpenStack

Accessing the hosted cluster

You can access hosted clusters on OpenStack by extracting the kubeconfig secret directly from resources by using the oc CLI.

The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs.

The secret name formats are as follows:

  • kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig. For example, clusters-hypershift-demo-admin-kubeconfig.

  • kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password. For example, clusters-hypershift-demo-kubeadmin-password.

The kubeconfig secret contains a Base64-encoded kubeconfig field. The kubeadmin password secret is also Base64-encoded; you can extract it and then use the password to log in to the API server or console of the hosted cluster.
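
For example, a minimal way to retrieve and decode the kubeadmin password, where <kubeadmin_password_secret_name> is the kubeadmin password secret described above and the password is assumed to be stored under the password data key:

    # Assumes the password is stored under the "password" data key.
    $ oc get secret -n <hosted_cluster_namespace> <kubeadmin_password_secret_name> \
      -o jsonpath='{.data.password}' | base64 --decode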

Prerequisites
  • The oc CLI is installed.

Procedure
  1. Extract the admin-kubeconfig secret by entering the following command:

    $ oc extract -n <hosted_cluster_namespace> \
      secret/<hosted_cluster_name>-admin-kubeconfig \
      --to=./hostedcluster-secrets --confirm
    Example output
    hostedcluster-secrets/kubeconfig
  2. View a list of nodes of the hosted cluster to verify your access by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets/kubeconfig get nodes

Enabling node auto-scaling for the hosted cluster

When you need more capacity in your hosted cluster on OpenStack and spare capacity is available, you can enable auto-scaling to install new worker nodes.

Procedure
  1. To enable auto-scaling, enter the following command:

    $ oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> \
      --type=json \
      -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]'
  2. Create a workload that requires a new node.

    1. Create a YAML file that contains the workload configuration by using the following example:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          app: reversewords
        name: reversewords
        namespace: default
      spec:
        replicas: 40
        selector:
          matchLabels:
            app: reversewords
        template:
          metadata:
            labels:
              app: reversewords
          spec:
            containers:
            - image: quay.io/mavazque/reversewords:latest
              name: reversewords
              resources:
                requests:
                  memory: 2Gi
    2. Save the file with the name workload-config.yaml.

    3. Apply the YAML by entering the following command:

      $ oc apply -f workload-config.yaml
  3. Extract the admin-kubeconfig secret by entering the following command:

    $ oc extract -n <hosted_cluster_namespace> \
      secret/<hosted_cluster_name>-admin-kubeconfig \
      --to=./hostedcluster-secrets --confirm
    Example output
    hostedcluster-secrets/kubeconfig
  4. Check that the new nodes are in the Ready status by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets/kubeconfig get nodes
  5. To remove the node, delete the workload by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets/kubeconfig -n <namespace> \
      delete deployment <deployment_name>
  6. Wait several minutes without requiring the additional capacity. Confirm that the node was removed by entering the following command:

    $ oc --kubeconfig ./hostedcluster-secrets/kubeconfig get nodes
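
To confirm that auto-scaling remains configured on the node pool, you can inspect the NodePool spec that you patched in the first step of this procedure; a minimal check:

    $ oc -n <hosted_cluster_namespace> get nodepool <hosted_cluster_name> \
      -o jsonpath='{.spec.autoScaling}'

The output should reflect the values from the patch, for example {"max":5,"min":2}.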

Configuring node pools for availability zones

You can distribute node pools across multiple OpenStack Nova availability zones to improve the high availability of your hosted cluster.

Note that availability zones do not necessarily correspond to fault domains and do not inherently provide high availability benefits.

Prerequisites
  • You created a hosted cluster.

  • You have access to the management cluster.

  • The hcp and oc CLIs are installed.

Procedure
  1. Set environment variables that are appropriate for your needs. For example, if you want to create two additional machines in the az1 availability zone, you could enter:

    $ export NODEPOOL_NAME="${CLUSTER_NAME}-az1" \
      && export WORKER_COUNT="2" \
      && export FLAVOR="m1.xlarge" \
      && export AZ="az1"
  2. Create the node pool by using your environment variables by entering the following command:

    $ hcp create nodepool openstack \
      --cluster-name <cluster_name> \
      --name $NODEPOOL_NAME \
      --replicas $WORKER_COUNT \
      --openstack-node-flavor $FLAVOR \
      --openstack-node-availability-zone $AZ

    where:

    <cluster_name>

    Specifies the name of your hosted cluster.

  3. To check the status of the node pool, list the nodepool resources in the clusters namespace by running the following command:

    $ oc get nodepools --namespace clusters
    Example output
    NAME                      CLUSTER         DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
    example                   example         5               5               False         False        4.17.0
    example-az1               example         2                               False         False                  True              True             Minimum availability requires 2 replicas, current 0 available
  4. Observe the nodes as they start on your hosted cluster by running the following command:

    $ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
    Example output
    NAME                      STATUS   ROLES    AGE     VERSION
    ...
    example-extra-az-zh9l5    Ready    worker   2m6s    v1.27.4+18eadca
    example-extra-az-zr8mj    Ready    worker   102s    v1.27.4+18eadca
    ...
  5. Verify that the node pool is created by running the following command:

    $ oc get nodepools --namespace clusters
    Example output
    NAME              CLUSTER         DESIRED   CURRENT   AVAILABLE   PROGRESSING   MESSAGE
    <node_pool_name>  <cluster_name>  2         2         2           False         All replicas are available
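
Optionally, you can confirm from OpenStack that the new servers were created in the requested availability zone. For example, the openstack server list --long command includes an availability zone column; exact column names can vary by client version:

    # Column names can vary slightly between OpenStack client versions.
    $ openstack server list --long -c Name -c Status -c "Availability Zone"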

Configuring additional ports for node pools

You can configure additional ports for node pools to support advanced networking scenarios, such as SR-IOV or multiple networks.

Use cases for additional ports for node pools

Common reasons to configure additional ports for node pools include the following:

SR-IOV (Single Root I/O Virtualization)

Enables a physical network device to appear as multiple virtual functions (VFs). By attaching additional ports to node pools, workloads can use SR-IOV interfaces to achieve low-latency, high-performance networking.

DPDK (Data Plane Development Kit)

Provides fast packet processing in user space, bypassing the kernel. Node pools with additional ports can expose interfaces for workloads that use DPDK to improve network performance.

Manila RWX volumes on NFS

Supports ReadWriteMany (RWX) volumes over NFS, allowing multiple nodes to access shared storage. Attaching additional ports to node pools enables workloads to reach the NFS network used by Manila.

Multus CNI

Enables pods to connect to multiple network interfaces. Node pools with additional ports support use cases that require secondary network interfaces, including dual-stack connectivity and traffic separation.

Options for additional ports for node pools

You can use the --openstack-node-additional-port flag to attach additional ports to nodes in a hosted cluster on OpenStack. The flag takes a list of comma-separated parameters, and it can be specified multiple times to attach multiple additional ports to the nodes.

The parameters are:

network-id (required)

The ID of the network to attach to the node.

vnic-type (optional)

The VNIC type to use for the port. If not specified, Neutron uses the default type, normal.

disable-port-security (optional)

Whether to disable port security for the port. If not specified, Neutron enables port security unless it is explicitly disabled at the network level.

address-pairs (optional)

A list of IP address pairs to assign to the port. The format is ip_address=mac_address. Multiple pairs can be provided, separated by a hyphen (-). The mac_address portion is optional.
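
For example, based on the parameters above, a single flag value that attaches a port on one network with port security disabled and two allowed address pairs, with the MAC addresses omitted, might look like the following; the network ID is a placeholder:

    --openstack-node-additional-port "network-id=<network_id>,disable-port-security=true,address-pairs=192.168.25.10-192.168.25.11"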

Creating additional ports for node pools

You can configure additional ports for node pools for hosted clusters that run on OpenStack.

Prerequisites
  • You created a hosted cluster.

  • You have access to the management cluster.

  • The hcp CLI is installed.

  • Additional networks are created in OpenStack.

  • The project that is used by the hosted cluster must have access to the additional networks.

  • You reviewed the options that are listed in "Options for additional ports for node pools".

Procedure
  • Create a node pool with additional ports attached to it by running the hcp create nodepool openstack command with the --openstack-node-additional-port option. For example:

    $ hcp create nodepool openstack \
      --cluster-name <cluster_name> \
      --name <nodepool_name> \
      --replicas <replica_count> \
      --openstack-node-flavor <flavor> \
      --openstack-node-additional-port "network-id=<sriov_net_id>,vnic-type=direct,disable-port-security=true" \
      --openstack-node-additional-port "network-id=<lb_net_id>,address-pairs:192.168.0.1-192.168.0.2"

    where:

    <cluster_name>

    Specifies the name of the hosted cluster.

    <nodepool_name>

    Specifies the name of the node pool.

    <replica_count>

    Specifies the desired number of replicas.

    <flavor>

    Specifies the OpenStack flavor to use.

    <sriov_net_id>

    Specifies an SR-IOV network ID.

    <lb_net_id>

    Specifies a load balancer network ID.
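
After the node pool machines are created, you can optionally confirm in OpenStack that ports exist on the additional networks, for example:

    $ openstack port list --network <sriov_net_id>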

Tuning hosted cluster node performance on OpenStack

You can tune hosted cluster node performance on OpenStack for high-performance workloads, such as cloud-native network functions (CNFs). Performance tuning includes configuring OpenStack resources, creating a performance profile, deploying a tuned NodePool resource, and enabling SR-IOV device support.

CNFs are designed to run in cloud-native environments. They can provide network services such as routing, firewalling, and load balancing. You can configure the node pool to use high-performance computing and networking devices to run CNFs.

Tuning performance for hosted cluster nodes

Create a performance profile and deploy a tuned NodePool resource to run high-performance workloads on OpenStack hosted control planes.

Prerequisites
  • You have an OpenStack flavor that has the necessary resources to run your workload, including dedicated CPU, memory, and host aggregate information.

  • You have an OpenStack network that is attached to SR-IOV or DPDK-capable NICs. The network must be available to the project used by hosted clusters.
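
The performance profile in the following procedure references the HYPERSHIFT_NAMESPACE, CPU_ISOLATED, CPU_RESERVED, and HUGEPAGES environment variables. If they are not already set, export them before you create the file; the values shown here are placeholders that you must adapt to your environment and workload:

    # Placeholder values; adjust them to your flavor and workload.
    $ export HYPERSHIFT_NAMESPACE="clusters" \
      && export CPU_ISOLATED="2-7" \
      && export CPU_RESERVED="0-1" \
      && export HUGEPAGES="4"

Because the profile references shell variables, expand them when you write the file, for example by creating it with a shell heredoc or by rendering it with a tool such as envsubst.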

Procedure
  1. Create a performance profile that meets your requirements in a file called perfprof.yaml. For example:

    Example performance profile in a config map
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cnf-performanceprofile
      namespace: "${HYPERSHIFT_NAMESPACE}"
    data:
      tuning: |
        apiVersion: performance.openshift.io/v2
        kind: PerformanceProfile
        metadata:
          name: cnf-performanceprofile
        spec:
          additionalKernelArgs:
            - nmi_watchdog=0
            - audit=0
            - mce=off
            - processor.max_cstate=1
            - idle=poll
            - intel_idle.max_cstate=0
            - amd_iommu=on
          cpu:
            isolated: "${CPU_ISOLATED}"
            reserved: "${CPU_RESERVED}"
          hugepages:
            defaultHugepagesSize: "1G"
            pages:
              - count: ${HUGEPAGES}
                node: 0
                size: 1G
          nodeSelector:
            node-role.kubernetes.io/worker: ''
          realTimeKernel:
            enabled: false
          globallyDisableIrqLoadBalancing: true
    If you do not already have environment variables set for the HyperShift Operator namespace, isolated and reserved CPUs, and huge pages count, create them before applying the performance profile, as shown in the example exports before this procedure.
  2. Apply the performance profile configuration by running the following command:

    $ oc apply -f perfprof.yaml
  3. If you do not already have a CLUSTER_NAME environment variable set for the name of your cluster, define it.

  4. Set a node pool name environment variable by running the following command:

    $ export NODEPOOL_NAME=$CLUSTER_NAME-cnf
  5. Set a flavor environment variable by running the following command:

    $ export FLAVOR="m1.xlarge.nfv"
  6. Create a node pool that uses the performance profile by running the following command:

    $ hcp create nodepool openstack \
      --cluster-name $CLUSTER_NAME \
      --name $NODEPOOL_NAME \
      --replicas 0 \
      --openstack-node-flavor $FLAVOR
  7. Patch the node pool to reference the PerformanceProfile resource by running the following command:

    $ oc patch nodepool -n ${HYPERSHIFT_NAMESPACE} ${NODEPOOL_NAME} \
      -p '{"spec":{"tuningConfig":[{"name":"cnf-performanceprofile"}]}}' --type=merge
  8. Scale the node pool by running the following command:

    $ oc scale nodepool/$NODEPOOL_NAME --namespace ${HYPERSHIFT_NAMESPACE} --replicas=1
  9. Wait for the nodes to be ready:

    1. Wait for the configuration update to start by running the following command:

      $ oc wait --for=condition=UpdatingConfig=True nodepool \
        -n ${HYPERSHIFT_NAMESPACE} ${NODEPOOL_NAME} \
        --timeout=5m
    2. Wait for the configuration update to finish by running the following command:

      $ oc wait --for=condition=UpdatingConfig=False nodepool \
        -n ${HYPERSHIFT_NAMESPACE} ${NODEPOOL_NAME} \
        --timeout=30m
    3. Wait until all nodes are healthy by running the following command:

      $ oc wait --for=condition=AllNodesHealthy nodepool \
        -n ${HYPERSHIFT_NAMESPACE} ${NODEPOOL_NAME} \
        --timeout=5m
You can make an SSH connection into the nodes or use the oc debug command to verify performance configurations.
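
For example, a minimal check of the kernel arguments that the performance profile applies, assuming that you have a kubeconfig for the hosted cluster and that <node_name> is one of the tuned worker nodes:

    # Assumes a hosted cluster kubeconfig exists at $CLUSTER_NAME-kubeconfig.
    $ oc --kubeconfig $CLUSTER_NAME-kubeconfig debug node/<node_name> \
      -- chroot /host cat /proc/cmdline

The output should include arguments from the additionalKernelArgs list, such as idle=poll.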

Enabling the SR-IOV Network Operator in a hosted cluster

You can enable the SR-IOV Network Operator to manage SR-IOV-capable devices on nodes deployed by the NodePool resource. The operator runs in the hosted cluster and requires labeled worker nodes.

Procedure
  1. Generate a kubeconfig file for the hosted cluster by running the following command:

    $ hcp create kubeconfig --name $CLUSTER_NAME > $CLUSTER_NAME-kubeconfig
  2. Set the KUBECONFIG environment variable to the generated kubeconfig file by running the following command:

    $ export KUBECONFIG=$CLUSTER_NAME-kubeconfig
  3. Label each worker node to indicate SR-IOV capability by running the following command:

    $ oc label node <worker_node_name> feature.node.kubernetes.io/network-sriov.capable=true

    where:

    <worker_node_name>

    Specifies the name of a worker node in the hosted cluster.

  4. Install the SR-IOV Network Operator in the hosted cluster by following the instructions in "Installing the SR-IOV Network Operator".

  5. After installation, configure SR-IOV workloads in the hosted cluster by using the same process as for a standalone cluster.
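
To confirm that the worker nodes are labeled as SR-IOV capable, you can list them by label in the hosted cluster, for example:

    $ oc get nodes -l feature.node.kubernetes.io/network-sriov.capable=true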