Deploying hosted control planes on OpenShift Virtualization

With hosted control planes and OpenShift Virtualization, you can create OpenShift Container Platform clusters with worker nodes that are hosted by KubeVirt virtual machines. Hosted control planes on OpenShift Virtualization provides several benefits:

  • Enhances resource usage by packing hosted control planes and hosted clusters in the same underlying bare metal infrastructure

  • Separates hosted control planes and hosted clusters to provide strong isolation

  • Reduces cluster provision time by eliminating the bare metal node bootstrapping process

  • Manages many releases under the same base OpenShift Container Platform cluster

The hosted control planes feature is enabled by default.

You can use the hosted control plane command line interface, hcp, to create an OpenShift Container Platform hosted cluster. The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine operator".
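
Before you create a hosted cluster, you can confirm that the hcp binary is installed and review the KubeVirt-specific options. The following commands are a minimal check; the available flags depend on your hcp version:

$ hcp --help
$ hcp create cluster kubevirt --help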

Requirements to deploy hosted control planes on OpenShift Virtualization

As you prepare to deploy hosted control planes on OpenShift Virtualization, consider the following information:

  • Run the hub cluster and workers on the same platform for hosted control planes.

  • Each hosted cluster must have a cluster-wide unique name. A hosted cluster name cannot be the same as the name of any existing managed cluster; otherwise, multicluster engine Operator cannot manage the hosted cluster.

  • Do not use clusters as a hosted cluster name.

  • A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.

  • When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using Logical Volume Manager storage".
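
For example, you can list the storage classes on the management cluster to confirm that a suitable local storage class is available for the hosted etcd pods, and then pass that class to the --etcd-storage-class flag when you create the hosted cluster:

$ oc get storageclass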

Prerequisites

You must meet the following prerequisites to create an OpenShift Container Platform cluster on OpenShift Virtualization:

  • You have administrator access to an OpenShift Container Platform cluster, version 4.14 or later, specified in the KUBECONFIG environment variable.

  • The OpenShift Container Platform management cluster has wildcard DNS routes enabled, as shown in the following example:

    $ oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'
  • The OpenShift Container Platform management cluster has OpenShift Virtualization, version 4.14 or later, installed on it. For more information, see "Installing OpenShift Virtualization using the web console".

  • The OpenShift Container Platform management cluster is on-premises bare metal.

  • The OpenShift Container Platform management cluster is configured with OVNKubernetes as the default pod network CNI.

  • The OpenShift Container Platform management cluster has a default storage class. For more information, see "Postinstallation storage configuration". The following example shows how to set a default storage class:

    $ oc patch storageclass ocs-storagecluster-ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  • You have a valid pull secret file for the quay.io/openshift-release-dev repository. For more information, see "Install OpenShift on any x86_64 platform with user-provisioned infrastructure".

  • You have installed the hosted control plane command line interface.

  • You have configured a load balancer. For more information, see "Configuring MetalLB".

  • For optimal network performance, you are using a network maximum transmission unit (MTU) of 9000 or greater on the OpenShift Container Platform cluster that hosts the KubeVirt virtual machines. If you use a lower MTU setting, network latency and the throughput of the hosted pods are affected. Enable multiqueue on node pools only when the MTU is 9000 or greater. You can check the cluster MTU as shown in the verification example after this list.

  • The multicluster engine Operator has at least one managed OpenShift Container Platform cluster. The local-cluster is automatically imported. For more information about the local-cluster, see "Advanced configuration" in the multicluster engine Operator documentation. You can check the status of your hub cluster by running the following command:

    $ oc get managedclusters local-cluster
  • On the OpenShift Container Platform cluster that hosts the OpenShift Virtualization virtual machines, you are using a ReadWriteMany (RWX) storage class so that live migration can be enabled.
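
Before you create a hosted cluster, you can verify the MTU and storage prerequisites. The following commands are a minimal sketch and assume that the clusterNetworkMTU status field and the StorageProfile resource, which OpenShift Virtualization provides and names after the storage class, are available on your management cluster:

$ oc get network.config cluster -o jsonpath='{.status.clusterNetworkMTU}'
$ oc get storageprofile <storage_class_name> -o jsonpath='{.status.claimPropertySets}'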

Firewall and port requirements

Ensure that you meet the following firewall and port requirements so that ports can communicate between the management cluster, the control plane, and hosted clusters. An example firewall rule for the API server port follows the list:

  • The kube-apiserver service runs on port 6443 by default and requires ingress access for communication between the control plane components.

    • If you use the NodePort publishing strategy, ensure that the node port that is assigned to the kube-apiserver service is exposed.

    • If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses.

  • If you use the NodePort publishing strategy, add firewall rules for the node ports that are assigned to the ignition-server and oauth-server services.

  • The konnectivity agent, which establishes a reverse tunnel to allow bi-directional communication on the hosted cluster, requires egress access to the cluster API server address on port 6443. With that egress access, the agent can reach the kube-apiserver service.

    • If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443.

    • If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes.

  • If you change the default port of 6443, adjust the rules to reflect that change.

  • Ensure that you open any ports that are required by the workloads that run in the clusters.

  • Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary.

  • For production deployments, use a load balancer to simplify access through a single IP address.
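
For example, on a bare-metal host that uses firewalld, a rule that allows traffic to the default API server port might look like the following commands. This is a minimal sketch; your environment might manage firewall rules differently, and you must adapt the port if you changed the default:

$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --reload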

Live migration for compute nodes

While the management cluster for hosted cluster virtual machines (VMs) is undergoing updates or maintenance, the hosted cluster VMs can be automatically live migrated to prevent disrupting hosted cluster workloads. As a result, the management cluster can be updated without affecting the availability and operation of the KubeVirt platform hosted clusters.

The live migration of KubeVirt VMs is enabled by default provided that the VMs use ReadWriteMany (RWX) storage for both the root volume and the storage classes that are mapped to the kubevirt-csi CSI provider.

You can verify that the VMs in a node pool are capable of live migration by checking the KubeVirtNodesLiveMigratable condition in the status section of a NodePool object.
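
For example, the following command prints that condition for a node pool in the clusters namespace. Replace <nodepool_name> with your node pool name:

$ oc get nodepool <nodepool_name> --namespace clusters -o jsonpath='{.status.conditions[?(@.type=="KubeVirtNodesLiveMigratable")]}'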

In the following example, the VMs cannot be live migrated because RWX storage is not used.

Example configuration where VMs cannot be live migrated
    - lastTransitionTime: "2024-10-08T15:38:19Z"
      message: |
        3 of 3 machines are not live migratable
        Machine user-np-ngst4-gw2hz: DisksNotLiveMigratable: user-np-ngst4-gw2hz is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-gw2hz-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode)
        Machine user-np-ngst4-npq7x: DisksNotLiveMigratable: user-np-ngst4-npq7x is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-npq7x-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode)
        Machine user-np-ngst4-q5nkb: DisksNotLiveMigratable: user-np-ngst4-q5nkb is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-q5nkb-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode)
      observedGeneration: 1
      reason: DisksNotLiveMigratable
      status: "False"
      type: KubeVirtNodesLiveMigratable

In the next example, the VMs meet the requirements to be live migrated.

Example configuration where VMs can be live migrated
    - lastTransitionTime: "2024-10-08T15:38:19Z"
      message: "All is well"
      observedGeneration: 1
      reason: AsExpected
      status: "True"
      type: KubeVirtNodesLiveMigratable

While live migration can protect VMs from disruption in normal circumstances, events such as infrastructure node failure can result in a hard restart of any VMs that are hosted on the failed node. For live migration to be successful, the source node that a VM is hosted on must be working correctly.

When the VMs in a node pool cannot be live migrated, workload disruption might occur on the hosted cluster during maintenance on the management cluster. By default, the hosted control planes controllers try to drain the workloads that are hosted on KubeVirt VMs that cannot be live migrated before the VMs are stopped. Draining the hosted cluster nodes before stopping the VMs allows pod disruption budgets to protect workload availability within the hosted cluster.

Creating a hosted cluster with the KubeVirt platform

With OpenShift Container Platform 4.14 and later, you can create a hosted cluster with KubeVirt, including creating it with external infrastructure.

Creating a hosted cluster with the KubeVirt platform by using the CLI

To create a hosted cluster, you can use the hosted control plane command line interface, hcp.

Procedure
  1. Enter the following command:

    $ hcp create cluster kubevirt \
      --name <hosted-cluster-name> \ (1)
      --node-pool-replicas <worker-count> \ (2)
      --pull-secret <path-to-pull-secret> \ (3)
      --memory <value-for-memory> \ (4)
      --cores <value-for-cpu> \ (5)
      --etcd-storage-class=<etcd-storage-class> (6)
    
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify the worker count, for example, 2.
    3 Specify the path to your pull secret, for example, /user/name/pullsecret.
    4 Specify a value for memory, for example, 6Gi.
    5 Specify a value for CPU, for example, 2.
    6 Specify the etcd storage class name, for example, lvm-storageclass.

    You can use the --release-image flag to set up the hosted cluster with a specific OpenShift Container Platform release.
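
    For example, to pin the hosted cluster to a specific release, append the flag with a release image reference. The 4.17.0-x86_64 tag is only an illustration; substitute a release that your environment supports:

    --release-image quay.io/openshift-release-dev/ocp-release:4.17.0-x86_64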

    A default node pool is created for the cluster with two virtual machine worker replicas according to the --node-pool-replicas flag.

  2. After a few moments, verify that the hosted control plane pods are running by entering the following command:

    $ oc -n clusters-<hosted-cluster-name> get pods
    Example output
    NAME                                                  READY   STATUS    RESTARTS   AGE
    capi-provider-5cc7b74f47-n5gkr                        1/1     Running   0          3m
    catalog-operator-5f799567b7-fd6jw                     2/2     Running   0          69s
    certified-operators-catalog-784b9899f9-mrp6p          1/1     Running   0          66s
    cluster-api-6bbc867966-l4dwl                          1/1     Running   0          66s
    .
    .
    .
    redhat-operators-catalog-9d5fd4d44-z8qqk              1/1     Running   0          66s

    A hosted cluster that has worker nodes that are backed by KubeVirt virtual machines typically takes 10-15 minutes to be fully provisioned.

  3. To check the status of the hosted cluster, see the corresponding HostedCluster resource by entering the following command:

    $ oc get --namespace clusters hostedclusters

    See the following example output, which illustrates a fully provisioned HostedCluster object:

    NAMESPACE   NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
    clusters    example   4.x.0     example-admin-kubeconfig   Completed   True        False         The hosted control plane is available

    Replace 4.x.0 with the supported OpenShift Container Platform version that you want to use.

Creating a hosted cluster with the KubeVirt platform by using external infrastructure

By default, the HyperShift Operator hosts both the control plane pods of the hosted cluster and the KubeVirt worker VMs within the same cluster. With the external infrastructure feature, you can place the worker node VMs on a separate cluster from the control plane pods.

  • The management cluster is the OpenShift Container Platform cluster that runs the HyperShift Operator and hosts the control plane pods for a hosted cluster.

  • The infrastructure cluster is the OpenShift Container Platform cluster that runs the KubeVirt worker VMs for a hosted cluster.

  • By default, the management cluster also acts as the infrastructure cluster that hosts VMs. However, for external infrastructure, the management and infrastructure clusters are different.

Prerequisites
  • You must have a namespace on the external infrastructure cluster in which to host the KubeVirt nodes. See the namespace example after these prerequisites.

  • You must have a kubeconfig file for the external infrastructure cluster.
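
For example, you can create the namespace on the infrastructure cluster by using its kubeconfig file. The clusters-example name follows the <hosted-cluster-namespace>-<hosted-cluster-name> pattern that the --infra-namespace argument uses and is only an example:

$ oc --kubeconfig <path-to-external-infra-kubeconfig> create namespace clusters-example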

Procedure

You can create a hosted cluster by using the hcp command line interface.

  • To place the KubeVirt worker VMs on the infrastructure cluster, use the --infra-kubeconfig-file and --infra-namespace arguments, as shown in the following example:

    $ hcp create cluster kubevirt \
      --name <hosted-cluster-name> \ (1)
      --node-pool-replicas <worker-count> \ (2)
      --pull-secret <path-to-pull-secret> \ (3)
      --memory <value-for-memory> \ (4)
      --cores <value-for-cpu> \ (5)
      --infra-namespace=<hosted-cluster-namespace>-<hosted-cluster-name> \ (6)
      --infra-kubeconfig-file=<path-to-external-infra-kubeconfig> (7)
    
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify the worker count, for example, 2.
    3 Specify the path to your pull secret, for example, /user/name/pullsecret.
    4 Specify a value for memory, for example, 6Gi.
    5 Specify a value for CPU, for example, 2.
    6 Specify the infrastructure namespace, for example, clusters-example.
    7 Specify the path to your kubeconfig file for the infrastructure cluster, for example, /user/name/external-infra-kubeconfig.

    After you enter that command, the control plane pods are hosted on the management cluster that the HyperShift Operator runs on, and the KubeVirt VMs are hosted on a separate infrastructure cluster.

Creating a hosted cluster by using the console

To create a hosted cluster with the KubeVirt platform by using the console, complete the following steps.

Procedure
  1. Open the OpenShift Container Platform web console and log in by entering your administrator credentials.

  2. In the console header, ensure that All Clusters is selected.

  3. Click Infrastructure > Clusters.

  4. Click Create cluster > Red Hat OpenShift Virtualization > Hosted.

  5. On the Create cluster page, follow the prompts to enter details about the cluster and node pools.

    • If you want to use predefined values to automatically populate fields in the console, you can create an OpenShift Virtualization credential. For more information, see Creating a credential for an on-premises environment.

    • On the Cluster details page, the pull secret is your OpenShift Container Platform pull secret that you use to access OpenShift Container Platform resources. If you selected an OpenShift Virtualization credential, the pull secret is automatically populated.

  6. Review your entries and click Create.

    The Hosted cluster view is displayed.

  7. Monitor the deployment of the hosted cluster in the Hosted cluster view. If you do not see information about the hosted cluster, ensure that All Clusters is selected, and click the cluster name.

  8. Wait until the control plane components are ready. This process can take a few minutes.

  9. To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster.


Configuring the default ingress and DNS for hosted control planes on OpenShift Virtualization

Every OpenShift Container Platform cluster includes a default application Ingress Controller, which must have a wildcard DNS record associated with it. By default, hosted clusters that are created by using the HyperShift KubeVirt provider automatically become a subdomain of the OpenShift Container Platform cluster that the KubeVirt virtual machines run on.

For example, your OpenShift Container Platform cluster might have the following default ingress DNS entry:

*.apps.mgmt-cluster.example.com

As a result, a KubeVirt hosted cluster that is named guest and that runs on that underlying OpenShift Container Platform cluster has the following default ingress:

*.apps.guest.apps.mgmt-cluster.example.com
Procedure

For the default ingress DNS to work properly, the cluster that hosts the KubeVirt virtual machines must allow wildcard DNS routes.

  • You can configure this behavior by entering the following command:

    $ oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'

When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected. This limitation applies to only the default ingress behavior.

Customizing ingress and DNS behavior

If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration.

Deploying a hosted cluster that specifies the base domain

To create a hosted cluster that specifies a base domain, complete the following steps.

Procedure
  1. Enter the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ (1)
      --node-pool-replicas <worker_count> \ (2)
      --pull-secret <path_to_pull_secret> \ (3)
      --memory <value_for_memory> \ (4)
      --cores <value_for_cpu> \ (5)
      --base-domain <basedomain> (6)
    
    1 Specify the name of your hosted cluster.
    2 Specify the worker count, for example, 2.
    3 Specify the path to your pull secret, for example, /user/name/pullsecret.
    4 Specify a value for memory, for example, 6Gi.
    5 Specify a value for CPU, for example, 2.
    6 Specify the base domain, for example, hypershift.lab.

    As a result, the hosted cluster has an ingress wildcard that is configured for the cluster name and the base domain, for example, *.apps.example.hypershift.lab. The hosted cluster remains in Partial status because after you create a hosted cluster with a unique base domain, you must configure the required DNS records and load balancer.

  2. View the status of your hosted cluster by entering the following command:

    $ oc get --namespace clusters hostedclusters
    Example output
    NAME            VERSION   KUBECONFIG                       PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
    example                   example-admin-kubeconfig         Partial    True        False         The hosted control plane is available
  3. Access the cluster by entering the following commands:

    $ hcp create kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig
    $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get co
    Example output
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    console                                    4.x.0     False       False         False      30m     routeHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host
    ingress                                    4.x.0     True        False         True       28m     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)

    Replace 4.x.0 with the supported OpenShift Container Platform version that you want to use.

Next steps

To fix the errors in the output, complete the steps in "Setting up the load balancer" and "Setting up a wildcard DNS".

If your hosted cluster is on bare metal, you might need MetalLB to set up load balancer services. For more information, see "Configuring MetalLB".

Setting up the load balancer

Set up the load balancer service that routes ingress traffic to the KubeVirt VMs and assigns a wildcard DNS entry to the load balancer IP address.

Procedure
  1. A NodePort service that exposes the hosted cluster ingress already exists. You can export the node ports and create the load balancer service that targets those ports.

    1. Get the HTTP node port by entering the following command:

      $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'

      Note the HTTP node port value to use in the next step.

    2. Get the HTTPS node port by entering the following command:

      $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'

      Note the HTTPS node port value to use in the next step.
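
    Optionally, capture both values in shell variables so that you can substitute them into the service manifest in the next step. This is a minimal sketch:

    $ HTTP_NODEPORT=$(oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
    $ HTTPS_NODEPORT=$(oc --kubeconfig <hosted_cluster_name>-kubeconfig get services -n openshift-ingress router-nodeport-default -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')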

  2. Create the load balancer service by entering the following command:

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: <hosted_cluster_name>
      name: <hosted_cluster_name>-apps
      namespace: clusters-<hosted_cluster_name>
    spec:
      ports:
      - name: https-443
        port: 443
        protocol: TCP
        targetPort: <https_node_port> (1)
      - name: http-80
        port: 80
        protocol: TCP
        targetPort: <http_node_port> (2)
      selector:
        kubevirt.io: virt-launcher
      type: LoadBalancer
    EOF
    1 Specify the HTTPS node port value that you noted in the previous step.
    2 Specify the HTTP node port value that you noted in the previous step.

Setting up a wildcard DNS

Set up a wildcard DNS record or CNAME that references the external IP of the load balancer service.

Procedure
  1. Get the external IP address by entering the following command:

    $ oc -n clusters-<hosted_cluster_name> get service <hosted_cluster_name>-apps -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    Example output
    192.168.20.30
  2. Configure a wildcard DNS entry that references the external IP address. View the following example DNS entry:

    *.apps.<hosted_cluster_name>.<base_domain>.

    The DNS entry must be able to route inside and outside of the cluster.

    Example DNS resolution
    $ dig +short test.apps.example.hypershift.lab
    
    192.168.20.30
  3. Check that hosted cluster status has moved from Partial to Completed by entering the following command:

    $ oc get --namespace clusters hostedclusters
    Example output
    NAME            VERSION   KUBECONFIG                       PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
    example         4.x.0     example-admin-kubeconfig         Completed   True        False         The hosted control plane is available

    Replace 4.x.0 with the supported OpenShift Container Platform version that you want to use.

Configuring MetalLB

You must install the MetalLB Operator before you configure MetalLB.

Procedure

Complete the following steps to configure MetalLB on your hosted cluster:

  1. Create a MetalLB resource by saving the following sample YAML content in the configure-metallb.yaml file:

    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
  2. Apply the YAML content by entering the following command:

    $ oc apply -f configure-metallb.yaml
    Example output
    metallb.metallb.io/metallb created
  3. Create an IPAddressPool resource by saving the following sample YAML content in the create-ip-address-pool.yaml file:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      addresses:
      - 192.168.216.32-192.168.216.122 (1)
    1 Create an address pool with an available range of IP addresses within the node network. Replace the IP address range with an unused pool of available IP addresses in your network.
  4. Apply the YAML content by entering the following command:

    $ oc apply -f create-ip-address-pool.yaml
    Example output
    ipaddresspool.metallb.io/metallb created
  5. Create an L2Advertisement resource by saving the following sample YAML content in the l2advertisement.yaml file:

    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: l2advertisement
      namespace: metallb-system
    spec:
      ipAddressPools:
       - metallb
  6. Apply the YAML content by entering the following command:

    $ oc apply -f l2advertisement.yaml
    Example output
    l2advertisement.metallb.io/l2advertisement created
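
Optionally, verify that the MetalLB components are running and that the resources that you created exist. The following commands are a quick check:

$ oc get pods -n metallb-system
$ oc get ipaddresspools,l2advertisements -n metallb-system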

Configuring additional networks, guaranteed CPUs, and VM scheduling for node pools

If you need to configure additional networks for node pools, request guaranteed CPU access for virtual machines (VMs), or manage scheduling of KubeVirt VMs, see the following procedures.

Adding multiple networks to a node pool

By default, nodes generated by a node pool are attached to the pod network. You can attach additional networks to the nodes by using Multus and NetworkAttachmentDefinitions.
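
For reference, the following NetworkAttachmentDefinition is a minimal bridge-type example. The my-namespace, my-network, and br1 names are placeholders, and the example assumes that the Linux bridge CNI plugin is available on the nodes:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: my-network
  namespace: my-namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "my-network",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {}
    }

With a definition like this in place, you would pass --additional-network name:my-namespace/my-network to the hcp command, as shown in the following procedure.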

Procedure

To add multiple networks to nodes, use the --additional-network argument by running the following command:

$ hcp create cluster kubevirt \
  --name <hosted_cluster_name> \ (1)
  --node-pool-replicas <worker_node_count> \ (2)
  --pull-secret <path_to_pull_secret> \ (3)
  --memory <memory> \ (4)
  --cores <cpu> \ (5)
  --additional-network name:<namespace/name> \ (6)
  --additional-network name:<namespace/name>
1 Specify the name of your hosted cluster, for instance, example.
2 Specify your worker node count, for example, 2.
3 Specify the path to your pull secret, for example, /user/name/pullsecret.
4 Specify the memory value, for example, 8Gi.
5 Specify the CPU value, for example, 2.
6 Set the value of the --additional-network argument to name:<namespace/name>. Replace <namespace/name> with the namespace and name of your NetworkAttachmentDefinition.

Using an additional network as default

You can add your additional network as a default network for the nodes by disabling the default pod network.

Procedure
  • To add an additional network as default to your nodes, run the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ (1)
      --node-pool-replicas <worker_node_count> \ (2)
      --pull-secret <path_to_pull_secret> \ (3)
      --memory <memory> \ (4)
      --cores <cpu> \ (5)
      --attach-default-network false \ (6)
      --additional-network name:<namespace>/<network_name> (7)
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify your worker node count, for example, 2.
    3 Specify the path to your pull secret, for example, /user/name/pullsecret.
    4 Specify the memory value, for example, 8Gi.
    5 Specify the CPU value, for example, 2.
    6 The --attach-default-network false argument disables the default pod network.
    7 Specify the additional network that you want to add to your nodes, for example, name:my-namespace/my-network.

Requesting guaranteed CPU resources

By default, KubeVirt VMs might share their CPUs with other workloads on a node, which can affect the performance of a VM. To avoid this performance impact, you can request guaranteed CPU access for VMs.

Procedure
  • To request guaranteed CPU resources, set the --qos-class argument to Guaranteed by running the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ (1)
      --node-pool-replicas <worker_node_count> \ (2)
      --pull-secret <path_to_pull_secret> \ (3)
      --memory <memory> \ (4)
      --cores <cpu> \ (5)
      --qos-class Guaranteed (6)
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify your worker node count, for example, 2.
    3 Specify the path to your pull secret, for example, /user/name/pullsecret.
    4 Specify the memory value, for example, 8Gi.
    5 Specify the CPU value, for example, 2.
    6 The --qos-class Guaranteed argument guarantees that the specified number of CPU resources are assigned to VMs.

Scheduling KubeVirt VMs on a set of nodes

By default, KubeVirt VMs created by a node pool are scheduled to any available nodes. You can schedule KubeVirt VMs on a specific set of nodes that have enough capacity to run the VMs.
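
For example, you can label the management cluster nodes that you want to target before you use the --vm-node-selector flag. The label key and value are placeholders for your own labels:

$ oc label node <node_name> <label_key>=<label_value>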

Procedure
  • To schedule KubeVirt VMs within a node pool on a specific set of nodes, use the --vm-node-selector argument by running the following command:

    $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \ (1)
      --node-pool-replicas <worker_node_count> \ (2)
      --pull-secret <path_to_pull_secret> \ (3)
      --memory <memory> \ (4)
      --cores <cpu> \ (5)
      --vm-node-selector <label_key>=<label_value>,<label_key>=<label_value> (6)
    1 Specify the name of your hosted cluster, for instance, example.
    2 Specify your worker node count, for example, 2.
    3 Specify the path to your pull secret, for example, /user/name/pullsecret.
    4 Specify the memory value, for example, 8Gi.
    5 Specify the CPU value, for example, 2.
    6 The --vm-node-selector flag defines a specific set of nodes that contains the key-value pairs. Replace <label_key> and <label_value> with the key and value of your labels respectively.

Scaling a node pool

You can manually scale a node pool by using the oc scale command.

Procedure
  1. Run the following command:

    NODEPOOL_NAME=${CLUSTER_NAME}-work
    NODEPOOL_REPLICAS=5
    
    $ oc scale nodepool/$NODEPOOL_NAME --namespace clusters --replicas=$NODEPOOL_REPLICAS
  2. After a few moments, enter the following command to see the status of the node pool:

    $ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
    Example output
    NAME                  STATUS   ROLES    AGE     VERSION
    example-9jvnf         Ready    worker   97s     v1.27.4+18eadca
    example-n6prw         Ready    worker   116m    v1.27.4+18eadca
    example-nc6g4         Ready    worker   117m    v1.27.4+18eadca
    example-thp29         Ready    worker   4m17s   v1.27.4+18eadca
    example-twxns         Ready    worker   88s     v1.27.4+18eadca

Adding node pools

You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as memory and CPU requirements.

Procedure
  1. To create a node pool, enter the following information. In this example, the node pool has more CPUs assigned to the VMs:

    export NODEPOOL_NAME=${CLUSTER_NAME}-extra-cpu
    export WORKER_COUNT="2"
    export MEM="6Gi"
    export CPU="4"
    export DISK="16"
    
    $ hcp create nodepool kubevirt \
      --cluster-name $CLUSTER_NAME \
      --name $NODEPOOL_NAME \
      --node-count $WORKER_COUNT \
      --memory $MEM \
      --cores $CPU \
      --root-volume-size $DISK
  2. Check the status of the node pool by listing nodepool resources in the clusters namespace:

    $ oc get nodepools --namespace clusters
    Example output
    NAME                      CLUSTER         DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
    example                   example         5               5               False         False        4.x.0
    example-extra-cpu         example         2                               False         False                  True              True             Minimum availability requires 2 replicas, current 0 available

    Replace 4.x.0 with the supported OpenShift Container Platform version that you want to use.

  3. After some time, you can check the status of the node pool by entering the following command:

    $ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
    Example output
    NAME                      STATUS   ROLES    AGE     VERSION
    example-9jvnf             Ready    worker   97s     v1.27.4+18eadca
    example-n6prw             Ready    worker   116m    v1.27.4+18eadca
    example-nc6g4             Ready    worker   117m    v1.27.4+18eadca
    example-thp29             Ready    worker   4m17s   v1.27.4+18eadca
    example-twxns             Ready    worker   88s     v1.27.4+18eadca
    example-extra-cpu-zh9l5   Ready    worker   2m6s    v1.27.4+18eadca
    example-extra-cpu-zr8mj   Ready    worker   102s    v1.27.4+18eadca
  4. Verify that the node pool is in the status that you expect by entering this command:

    $ oc get nodepools --namespace clusters
    Example output
    NAME                      CLUSTER         DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
    example                   example         5               5               False         False        4.x.0
    example-extra-cpu         example         2               2               False         False        4.x.0

    Replace 4.x.0 with the supported OpenShift Container Platform version that you want to use.


Verifying hosted cluster creation on OpenShift Virtualization

To verify that your hosted cluster was successfully created, complete the following steps.

Procedure
  1. Verify that the HostedCluster resource transitioned to the completed state by entering the following command:

    $ oc get --namespace clusters hostedclusters <hosted_cluster_name>
    Example output
    NAMESPACE   NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
    clusters    example   4.12.2    example-admin-kubeconfig   Completed   True        False         The hosted control plane is available
  2. Verify that all the cluster operators in the hosted cluster are online by entering the following commands:

    $ hcp create kubeconfig --name <hosted_cluster_name> > <hosted_cluster_name>-kubeconfig
    $ oc get co --kubeconfig=<hosted_cluster_name>-kubeconfig
    Example output
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    console                                    4.12.2   True        False         False      2m38s
    csi-snapshot-controller                    4.12.2   True        False         False      4m3s
    dns                                        4.12.2   True        False         False      2m52s
    image-registry                             4.12.2   True        False         False      2m8s
    ingress                                    4.12.2   True        False         False      22m
    kube-apiserver                             4.12.2   True        False         False      23m
    kube-controller-manager                    4.12.2   True        False         False      23m
    kube-scheduler                             4.12.2   True        False         False      23m
    kube-storage-version-migrator              4.12.2   True        False         False      4m52s
    monitoring                                 4.12.2   True        False         False      69s
    network                                    4.12.2   True        False         False      4m3s
    node-tuning                                4.12.2   True        False         False      2m22s
    openshift-apiserver                        4.12.2   True        False         False      23m
    openshift-controller-manager               4.12.2   True        False         False      23m
    openshift-samples                          4.12.2   True        False         False      2m15s
    operator-lifecycle-manager                 4.12.2   True        False         False      22m
    operator-lifecycle-manager-catalog         4.12.2   True        False         False      23m
    operator-lifecycle-manager-packageserver   4.12.2   True        False         False      23m
    service-ca                                 4.12.2   True        False         False      4m41s
    storage                                    4.12.2   True        False         False      4m43s