Creating infrastructure machine sets | Machine management | OpenShift Container Platform 4.4

You can create a machine set to host only infrastructure components. You apply specific Kubernetes labels to these machines and then update the infrastructure components to run on only those machines. These infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment.

Unlike earlier versions of OpenShift Container Platform, you cannot move the infrastructure components to the master machines. To move the components, you must create a new machine set.

OpenShift Container Platform infrastructure components

The following OpenShift Container Platform components are infrastructure components:

  • Kubernetes and OpenShift Container Platform control plane services that run on masters

  • The default router

  • The container image registry

  • The cluster metrics collection, or monitoring service

  • Cluster aggregated logging

  • Service brokers

Any node that runs any other container, pod, or component is a worker node that your subscription must cover.

Creating infrastructure machine sets for production environments

In a production deployment, deploy at least three machine sets to hold infrastructure components. Both the logging aggregation solution and the service mesh deploy Elasticsearch, which requires three instances installed on different nodes. For high availability, deploy these nodes to different availability zones. Because each availability zone requires a separate machine set, create at least three machine sets.

Creating machine sets for different clouds

Use the sample machine set for your cloud.

Sample YAML for a machine set custom resource on AWS

This sample YAML defines a machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
  name: <infrastructureID>-<role>-<zone> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-<zone> (2)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
        machine.openshift.io/cluster-api-machine-role: <role> (3)
        machine.openshift.io/cluster-api-machine-type: <role> (3)
        machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-<zone> (2)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" (3)
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 (4)
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
            - ebs:
                iops: 0
                volumeSize: 120
                volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructureID>-worker-profile (1)
          instanceType: m4.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructureID>-worker-sg (1)
          subnet:
            filters:
              - name: tag:Name
                values:
                  - <infrastructureID>-private-us-east-1a (1)
          tags:
            - name: kubernetes.io/cluster/<infrastructureID> (1)
              value: owned
          userDataSecret:
            name: worker-user-data
1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 Specify the infrastructure ID, node label, and zone.
3 Specify the node label to add.
4 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes.
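
If you are not sure which AMI ID to use, one option is to copy it from an existing worker machine set in the same zone. The following command is a sketch: the machine set name <infrastructureID>-worker-us-east-1a is an assumption based on the default naming, and the jsonpath expression follows the providerSpec layout shown above.

$ oc get machineset <infrastructureID>-worker-us-east-1a -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}'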

Sample YAML for a machine set custom resource on Azure

This sample YAML defines a machine set that runs in zone 1 of the centralus Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
    machine.openshift.io/cluster-api-machine-role: <role> (2)
    machine.openshift.io/cluster-api-machine-type: <role> (2)
  name: <infrastructureID>-<role>-<region> (3)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-<region> (3)
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
        machine.openshift.io/cluster-api-machine-role: <role> (2)
        machine.openshift.io/cluster-api-machine-type: <role> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructureID>-<role>-<region> (3)
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/<role>: "" (2)
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructureID>-rg/providers/Microsoft.Compute/images/<infrastructureID>
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: centralus
          managedIdentity: <infrastructureID>-identity (1)
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructureID>-rg (1)
          sshPrivateKey: ""
          sshPublicKey: ""
          subnet: <infrastructureID>-<role>-subnet  (1) (2)
          userDataSecret:
            name: <role>-user-data (2)
          vmSize: Standard_D2s_v3
          vnet: <infrastructureID>-vnet (1)
          zone: "1" (4)
1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 Specify the node label to add.
3 Specify the infrastructure ID, node label, and region.
4 Specify the zone within your region in which to place machines. Be sure that your region supports the zone that you specify.
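
As with the AWS sample, provider-specific values such as the image resourceID or the vnet name can be copied from an existing worker machine set instead of being constructed by hand. A sketch, assuming an existing worker machine set named <infrastructureID>-worker-centralus1 and following the providerSpec layout shown above:

$ oc get machineset <infrastructureID>-worker-centralus1 -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.image.resourceID}{"\n"}'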

Sample YAML for a machine set custom resource on GCP

This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructureID> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
  name: <infrastructureID>-w-a (1)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructureID>-w-a (1)
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructureID> (1)
        machine.openshift.io/cluster-api-machine-role: <role> (3)
        machine.openshift.io/cluster-api-machine-type: <role> (3)
        machine.openshift.io/cluster-api-machineset: <infrastructureID>-w-a (1)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" (3)
      providerSpec:
        value:
          apiVersion: gcpprovider.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
          - autoDelete: true
            boot: true
            image: <infrastructureID>-rhcos-image (1)
            labels: null
            sizeGb: 128
            type: pd-ssd
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          metadata:
            creationTimestamp: null
          networkInterfaces:
          - network: <infrastructureID>-network (1)
            subnetwork: <infrastructureID>-<role>-subnet (2)
          projectID: <project_name> (4)
          region: us-central1
          serviceAccounts:
          - email: <infrastructureID>-w@<project_name>.iam.gserviceaccount.com  (1) (4)
            scopes:
            - https://www.googleapis.com/auth/cloud-platform
          tags:
          - <infrastructureID>-<role> (2)
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 Specify the infrastructure ID and node label.
3 Specify the node label to add.
4 Specify the name of the GCP project that you use for your cluster.
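
If you are not sure which project name to specify, one option is to read it from an existing worker machine set, such as the <infrastructureID>-w-a machine set used in this sample. A sketch, following the providerSpec layout shown above:

$ oc get machineset <infrastructureID>-w-a -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.projectID}{"\n"}'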

Creating a machine set

In addition to the machine sets created by the installation program, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites
  • Deploy an OpenShift Container Platform cluster.

  • Install the OpenShift CLI (oc).

  • Log in to oc as a user with cluster-admin permission.

Procedure
  1. Create a new YAML file named <file_name>.yaml that contains the machine set custom resource (CR) sample, as shown.

    Ensure that you set the <clusterID> and <role> parameter values.

    1. If you are not sure which value to set for a specific field, you can check an existing machine set from your cluster.

      $ oc get machinesets -n openshift-machine-api
      
      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
    2. Check values of a specific machine set:

      $ oc get machineset <machineset_name> -n \
           openshift-machine-api -o yaml
      
      ....
      
      template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: agl030519-vplxk (1)
              machine.openshift.io/cluster-api-machine-role: worker (2)
              machine.openshift.io/cluster-api-machine-type: worker
              machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
      1 The cluster ID.
      2 A default node label.
  2. Create the new MachineSet CR:

    $ oc create -f <file_name>.yaml
  3. View the list of machine sets:

    $ oc get machineset -n openshift-machine-api
    
    
    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again.

  4. After the new machine set is available, check the status of the machine and the node that it references:

    $ oc describe machine <name> -n openshift-machine-api

    For example:

    $ oc describe machine agl030519-vplxk-infra-us-east-1a -n openshift-machine-api
    
    status:
      addresses:
      - address: 10.0.133.18
        type: InternalIP
      - address: ""
        type: ExternalDNS
      - address: ip-10-0-133-18.ec2.internal
        type: InternalDNS
      lastUpdated: "2019-05-03T10:38:17Z"
      nodeRef:
        kind: Node
        name: ip-10-0-133-18.ec2.internal
        uid: 71fb8d75-6d8f-11e9-9ff3-0e3f103c7cd8
      providerStatus:
        apiVersion: awsproviderconfig.openshift.io/v1beta1
        conditions:
        - lastProbeTime: "2019-05-03T10:34:31Z"
          lastTransitionTime: "2019-05-03T10:34:31Z"
          message: machine successfully created
          reason: MachineCreationSucceeded
          status: "True"
          type: MachineCreation
        instanceId: i-09ca0701454124294
        instanceState: running
        kind: AWSMachineProviderStatus
  5. View the new node and confirm that it has the label that you specified:

    $ oc get node <node_name> --show-labels

    Review the command output and confirm that node-role.kubernetes.io/<your_label> is in the LABELS list.
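
    You can also list only the nodes that carry the label, which is useful when several nodes share the role. A sketch, assuming <your_label> is the role label that you added, for example infra:

    $ oc get nodes -l node-role.kubernetes.io/<your_label>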

Changes to a machine set are not applied to existing machines owned by the machine set. For example, labels that you edit or add on an existing machine set are not propagated to the machines and nodes that are already associated with the machine set.

Next steps

If you need machine sets in other availability zones, repeat this process to create more machine sets.
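
A minimal sketch of repeating the process for several zones, assuming a template file named infra-machineset-template.yaml that still contains the <infrastructureID>, <role>, and <zone> placeholders from the samples above:

$ infra_id=$(oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster)
$ for zone in us-east-1a us-east-1b us-east-1c; do
    sed "s/<infrastructureID>/${infra_id}/g; s/<role>/infra/g; s/<zone>/${zone}/g" \
      infra-machineset-template.yaml | oc create -f -
  done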

Assigning machine set resources to infrastructure nodes

After you create an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.

However, because an infra node also carries the worker role, user workloads can be inadvertently scheduled on it. To avoid this, apply a taint to the infra node and add tolerations to the pods that you want to run there.

Binding infrastructure node workloads using taints and tolerations

If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.

Prerequisites
  • Configure additional MachineSet objects in your OpenShift Container Platform cluster.

Procedure
  1. Use the following command to add a taint to the infra node to prevent scheduling user workloads on it:

    $ oc adm taint nodes <node_name> <key>:<effect>

    For example:

    $ oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule

    This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule. Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.

    If a descheduler is used, pods violating node taints could be evicted from the cluster.

  2. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification:

    tolerations:
      - effect: NoSchedule (1)
        key: node-role.kubernetes.io/infra (2)
        operator: Exists (3)
    1 Specify the effect that you added to the node.
    2 Specify the key that you added to the node.
    3 Specify the Exists operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node.

    This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node.

    Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.

  3. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details.
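
    The following is a minimal sketch of a Pod object that combines the toleration from step 2 with a node selector for the infra label, which is one way to place the pod. The pod name and image are placeholders for illustration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-infra-pod
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        operator: Exists
      containers:
      - name: example
        image: registry.example.com/example:latest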


Moving resources to infrastructure machine sets

Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.

Moving the router

You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.

Prerequisites
  • Configure additional machine sets in your OpenShift Container Platform cluster.

Procedure
  1. View the IngressController custom resource for the router Operator:

    $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

    The command output resembles the following text:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      creationTimestamp: 2019-04-18T12:35:39Z
      finalizers:
      - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
      generation: 1
      name: default
      namespace: openshift-ingress-operator
      resourceVersion: "11341"
      selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
      uid: 79509e05-61d6-11e9-bc55-02ce4781844a
    spec: {}
    status:
      availableReplicas: 2
      conditions:
      - lastTransitionTime: 2019-04-18T12:36:15Z
        status: "True"
        type: Available
      domain: apps.<cluster>.example.com
      endpointPublishingStrategy:
        type: LoadBalancerService
      selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
  2. Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

    $ oc edit ingresscontroller default -n openshift-ingress-operator

    Add the nodeSelector stanza that references the infra label to the spec section, as shown:

      spec:
        nodePlacement:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/infra: ""
  3. Confirm that the router pod is running on the infra node.

    1. View the list of router pods and note the node name of the running pod:

      $ oc get pod -n openshift-ingress -o wide
      
      NAME                              READY     STATUS        RESTARTS   AGE       IP           NODE                           NOMINATED NODE   READINESS GATES
      router-default-86798b4b5d-bdlvd   1/1      Running       0          28s       10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
      router-default-955d875f4-255g8    0/1      Terminating   0          19h       10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

      In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

    2. View the node status of the running pod:

      $ oc get node <node_name> (1)
      
      NAME                          STATUS  ROLES         AGE   VERSION
      ip-10-0-217-226.ec2.internal  Ready   infra,worker  17h   v1.17.1
      1 Specify the <node_name> that you obtained from the pod list.

      Because the role list includes infra, the pod is running on the correct node.
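
As an alternative to the interactive edit in step 2, the same nodePlacement change can be applied non-interactively with a merge patch. This is a sketch; review the resulting spec afterward:

$ oc patch ingresscontroller default -n openshift-ingress-operator --type merge \
    -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}}}}}'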

Moving the default registry

You can configure the registry Operator to deploy its pods to different nodes.

Prerequisites
  • Configure additional machine sets in your OpenShift Container Platform cluster.

Procedure
  1. View the config/cluster object:

    $ oc get config/cluster -o yaml

    The output resembles the following text:

    apiVersion: imageregistry.operator.openshift.io/v1
    kind: Config
    metadata:
      creationTimestamp: 2019-02-05T13:52:05Z
      finalizers:
      - imageregistry.operator.openshift.io/finalizer
      generation: 1
      name: cluster
      resourceVersion: "56174"
      selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
      uid: 36fd3724-294d-11e9-a524-12ffeee2931b
    spec:
      httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
      logging: 2
      managementState: Managed
      proxy: {}
      replicas: 1
      requests:
        read: {}
        write: {}
      storage:
        s3:
          bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
          region: us-east-1
    status:
    ...
  2. Edit the config/cluster object:

    $ oc edit config/cluster
  3. Add the following lines of text to the spec section of the object:

      nodeSelector:
        node-role.kubernetes.io/infra: ""
  4. Verify that the registry pod has been moved to the infrastructure node.

    1. Run the following command to identify the node where the registry pod is located:

      $ oc get pods -o wide -n openshift-image-registry
    2. Confirm the node has the label you specified:

      $ oc describe node <node_name>

      Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
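
As with the router, a merge patch is a non-interactive alternative to oc edit config/cluster. A sketch:

$ oc patch config/cluster --type merge -p '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""}}}'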

Moving the monitoring solution

By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map.

Procedure
  1. Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |+
        alertmanagerMain:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        prometheusK8s:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        prometheusOperator:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        grafana:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        k8sPrometheusAdapter:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        kubeStateMetrics:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        telemeterClient:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        openshiftStateMetrics:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        thanosQuerier:
          nodeSelector:
            node-role.kubernetes.io/infra: ""

    Applying this config map forces the components of the monitoring stack to redeploy to infrastructure nodes.

  2. Apply the new config map:

    $ oc create -f cluster-monitoring-configmap.yaml
  3. Watch the monitoring pods move to the new machines:

    $ watch 'oc get pod -n openshift-monitoring -o wide'
  4. If a component has not moved to the infra node, delete the pod with this component:

    $ oc delete pod -n openshift-monitoring <pod>

    The component from the deleted pod is re-created on the infra node.
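
To double-check a specific node, you can also filter pods by the node they run on. A sketch, where <infra_node_name> is one of your infra nodes:

$ oc get pod -n openshift-monitoring -o wide --field-selector spec.nodeName=<infra_node_name>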


Moving the cluster logging resources

You can configure the Cluster Logging Operator to deploy the pods for any or all of the Cluster Logging components, Elasticsearch, Kibana, and Curator, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.

You should set your machine set to use at least 6 replicas.

Prerequisites
  • Cluster logging and Elasticsearch must be installed. These features are not installed by default.

Procedure
  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    
    ....
    
    spec:
      collection:
        logs:
          fluentd:
            resources: null
          type: fluentd
      curation:
        curator:
          nodeSelector: (1)
              node-role.kubernetes.io/infra: ''
          resources: null
          schedule: 30 3 * * *
        type: curator
      logStore:
        elasticsearch:
          nodeCount: 3
          nodeSelector: (1)
              node-role.kubernetes.io/infra: ''
          redundancyPolicy: SingleRedundancy
          resources:
            limits:
              cpu: 500m
              memory: 16Gi
            requests:
              cpu: 500m
              memory: 16Gi
          storage: {}
        type: elasticsearch
      managementState: Managed
      visualization:
        kibana:
          nodeSelector: (1)
              node-role.kubernetes.io/infra: '' (1)
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    
    ....
1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node.
Verification steps

To verify that a component has moved, you can use the oc get pod -o wide command.

For example:

  • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

    $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide
    NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
  • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

    $ oc get nodes
    NAME                                         STATUS   ROLES          AGE   VERSION
    ip-10-0-133-216.us-east-2.compute.internal   Ready    master         60m   v1.17.1
    ip-10-0-139-146.us-east-2.compute.internal   Ready    master         60m   v1.17.1
    ip-10-0-139-192.us-east-2.compute.internal   Ready    worker         51m   v1.17.1
    ip-10-0-139-241.us-east-2.compute.internal   Ready    worker         51m   v1.17.1
    ip-10-0-147-79.us-east-2.compute.internal    Ready    worker         51m   v1.17.1
    ip-10-0-152-241.us-east-2.compute.internal   Ready    master         60m   v1.17.1
    ip-10-0-139-48.us-east-2.compute.internal    Ready    infra          51m   v1.17.1

    Note that the node has a node-role.kubernetes.io/infra: '' label:

    $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml
    
    kind: Node
    apiVersion: v1
    metadata:
      name: ip-10-0-139-48.us-east-2.compute.internal
      selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
      uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
      resourceVersion: '39083'
      creationTimestamp: '2020-04-13T19:07:55Z'
      labels:
        node-role.kubernetes.io/infra: ''
    ....
  • To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    
    ....
    
    spec:
    
    ....
    
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: '' (1)
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    1 Add a node selector to match the label in the node specification.
  • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

    $ oc get pods
    NAME                                            READY   STATUS        RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
    fluentd-42dzz                                   1/1     Running       0          28m
    fluentd-d74rq                                   1/1     Running       0          28m
    fluentd-m5vr9                                   1/1     Running       0          28m
    fluentd-nkxl7                                   1/1     Running       0          28m
    fluentd-pdvqb                                   1/1     Running       0          28m
    fluentd-tflh6                                   1/1     Running       0          28m
    kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
    kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s
  • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

    $ oc get pod kibana-7d85dcffc8-bfpfp -o wide
    NAME                      READY   STATUS        RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-7d85dcffc8-bfpfp   2/2     Running       0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
  • After a few moments, the original Kibana pod is removed.

    $ oc get pods
    NAME                                            READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
    fluentd-42dzz                                   1/1     Running   0          29m
    fluentd-d74rq                                   1/1     Running   0          29m
    fluentd-m5vr9                                   1/1     Running   0          29m
    fluentd-nkxl7                                   1/1     Running   0          29m
    fluentd-pdvqb                                   1/1     Running   0          29m
    fluentd-tflh6                                   1/1     Running   0          29m
    kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s