Managing Nodes

Overview

You can manage nodes in your instance using the CLI.

When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks.

Listing Nodes

To list all nodes that are known to the master:

$ oc get nodes
NAME                 LABELS                                        STATUS
node1.example.com    kubernetes.io/hostname=node1.example.com      Ready
node2.example.com    kubernetes.io/hostname=node2.example.com      Ready

To only list information about a single node, replace <node> with the full node name:

$ oc get node <node>
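
For example, using a node name from the listing above (adding -o yaml prints the full node object):

$ oc get node node1.example.com
$ oc get node node1.example.com -o yaml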

The STATUS column in the output of these commands can show nodes with the following conditions:

Table 1. Node Conditions

Condition            Description
Ready                The node is passing the health checks performed from the master by returning StatusOK.
NotReady             The node is not passing the health checks performed from the master.
SchedulingDisabled   Pods cannot be scheduled for placement on the node.

The STATUS column can also show Unknown for a node if the CLI cannot find any node condition.

To get more detailed information about a specific node, including the reason for the current condition:

$ oc describe node <node>

For example:

$ oc describe node node1.example.com
Name:			node1.example.com
Labels:			kubernetes.io/hostname=node1.example.com
CreationTimestamp:	Wed, 10 Jun 2015 17:22:34 +0000
Conditions:
  Type		Status	LastHeartbeatTime			LastTransitionTime			Reason					Message
  Ready 	True 	Wed, 10 Jun 2015 19:56:16 +0000 	Wed, 10 Jun 2015 17:22:34 +0000 	kubelet is posting ready status
Addresses:	127.0.0.1
Capacity:
 memory:	1017552Ki
 pods:		100
 cpu:		2
Version:
 Kernel Version:		3.17.4-301.fc21.x86_64
 OS Image:			Fedora 21 (Twenty One)
 Container Runtime Version:	docker://1.6.0
 Kubelet Version:		v0.17.1-804-g496be63
 Kube-Proxy Version:		v0.17.1-804-g496be63
ExternalID:			node1.example.com
Pods:				(2 in total)
  docker-registry-1-9yyw5
  router-1-maytv
No events.

Adding Nodes

To add nodes to your existing OpenShift Enterprise cluster, you can run an Ansible playbook that handles installing the node components, generating the required certificates, and other important steps. See the advanced installation method for instructions on running the playbook directly.
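
A minimal sketch of that approach, assuming openshift-ansible is installed under /usr/share/ansible/openshift-ansible and that new hosts are registered in a [new_nodes] inventory group (verify the playbook path and group name against your installed openshift-ansible version):

# /etc/ansible/hosts (excerpt)
# Also add new_nodes to the [OSEv3:children] section of your inventory.
[new_nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"

Then run the node scaleup playbook against that inventory:

$ ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml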

Alternatively, if you used the quick installation method, you can re-run the installer to add nodes, which performs the same steps.

Deleting Nodes

When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node itself are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Enterprise; pods backed by replication controllers are rescheduled to other available nodes; and local manifest pods must be deleted manually.

To delete a node from the OpenShift Enterprise cluster:

  1. Evacuate pods from the node you are preparing to delete, as shown in the combined example after this procedure.

  2. Delete the node object:

    $ oc delete node <node>
  3. Check that the node has been removed from the node list:

    $ oc get nodes

    Pods should now be scheduled only on the remaining nodes that are in the Ready state.

  4. If you want to uninstall all OpenShift Enterprise content from the node host, including all pods and containers, continue to Uninstalling Nodes and follow the procedure using the uninstall.yml playbook. The procedure assumes general understanding of the advanced installation method using Ansible.
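
For example, to remove node1.example.com after migrating its pods (evacuation is covered in more detail in Evacuating Pods on Nodes below):

$ oadm manage-node node1.example.com --schedulable=false
$ oadm manage-node node1.example.com --evacuate
$ oc delete node node1.example.com
$ oc get nodes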

Updating Labels on Nodes

To add or update labels on a node:

$ oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>

To see more detailed usage:

$ oc label -h
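
For example, to add a region label, change it, and then remove it (region=primary is an arbitrary example key/value; --overwrite is required to change an existing value, and a trailing dash removes the key):

$ oc label node node1.example.com region=primary
$ oc label node node1.example.com region=secondary --overwrite
$ oc label node node1.example.com region-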

Listing Pods on Nodes

To list all or selected pods on one or more nodes:

$ oadm manage-node <node1> <node2> \
    --list-pods [--pod-selector=<pod_selector>] [-o json|yaml]

To list all or selected pods on selected nodes:

$ oadm manage-node --selector=<node_selector> \
    --list-pods [--pod-selector=<pod_selector>] [-o json|yaml]
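
For example, to list only the pods carrying a hypothetical service=registry label on all nodes labeled region=primary:

$ oadm manage-node --selector=region=primary \
    --list-pods --pod-selector=service=registry -o yaml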

Marking Nodes as Unschedulable or Schedulable

By default, healthy nodes with a Ready status are marked as schedulable, meaning that new pods are allowed for placement on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected.

To mark a node or nodes as unschedulable:

$ oadm manage-node <node1> <node2> --schedulable=false

For example:

$ oadm manage-node node1.example.com --schedulable=false
NAME                 LABELS                                        STATUS
node1.example.com    kubernetes.io/hostname=node1.example.com      Ready,SchedulingDisabled

To mark a currently unschedulable node or nodes as schedulable:

$ oadm manage-node <node1> <node2> --schedulable

Alternatively, instead of specifying individual node names (e.g., <node1> <node2>), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable.
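
For example, to mark every node carrying an arbitrary region=primary label as unschedulable:

$ oadm manage-node --selector=region=primary --schedulable=false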

Evacuating Pods on Nodes

Evacuating pods allows you to migrate all or selected pods from a given node or nodes. Nodes must first be marked unschedulable to perform pod evacuation.

Only pods backed by a replication controller can be evacuated; the replication controllers create new pods on other nodes and remove the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod selector. The selector is based on labels, so all pods with the specified label are evacuated.

To list pods that will be migrated without actually performing the evacuation, use the --dry-run option:

$ oadm manage-node <node1> <node2> \
    --evacuate --dry-run [--pod-selector=<pod_selector>]

To actually evacuate all or selected pods on one or more nodes:

$ oadm manage-node <node1> <node2> \
    --evacuate [--pod-selector=<pod_selector>]

You can force deletion of bare pods by using the --force option:

$ oadm manage-node <node1> <node2> \
    --evacuate --force [--pod-selector=<pod_selector>]

Alternatively, instead of specifying individual node names (e.g., <node1> <node2>), you can use the --selector=<node_selector> option to evacuate pods on selected nodes.
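
For example, assuming node1.example.com has already been marked unschedulable as described above, you could first preview the migration and then evacuate only pods carrying a hypothetical service=myapp label:

$ oadm manage-node node1.example.com --evacuate --dry-run
$ oadm manage-node node1.example.com --evacuate --pod-selector=service=myapp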

Configuring Node Resources

You can configure node resources by adding kubelet arguments to the node configuration file (/etc/origin/node/node-config.yaml). Add the kubeletArguments section and include any desired options:

kubeletArguments:
  max-pods: (1)
    - "110"
  resolv-conf: (2)
    - "/etc/resolv.conf"
  image-gc-high-threshold: (3)
    - "90"
  image-gc-low-threshold: (4)
    - "80"
1 Maximum number of pods that can run on this kubelet.
2 Resolver configuration file used as the basis for the container DNS resolution configuration.
3 The percent of disk usage after which image garbage collection is always run. Default: 90%
4 The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Default: 80%
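
Changes to node-config.yaml take effect only after the node service is restarted. A minimal example, assuming the standard OpenShift Enterprise node service name (adjust to your environment):

# systemctl restart atomic-openshift-node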

To view all available kubelet options:

$ kubelet -h

This can also be set during an advanced installation using the openshift_node_kubelet_args variable. For example:

openshift_node_kubelet_args={'max-pods': ['110'], 'resolv-conf': ['/etc/resolv.conf'],  'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

Changing Node Traffic Interface

By default, DNS routes all node traffic. During node registration, the master receives the node IP addresses from the DNS configuration, and therefore accessing nodes via DNS is the most flexible solution for most deployments.

If your deployment is using a cloud provider, then the node gets the IP information from the cloud provider. However, openshift-sdn attempts to determine the IP through a variety of methods, including a DNS lookup on the nodeName (if set), or on the system hostname (if nodeName is not set).

However, you may need to change the node traffic interface, for example, when:

  • OpenShift Enterprise is installed in a cloud provider where internal hostnames are not configured/resolvable by all hosts.

  • The node’s IP from the master’s perspective is not the same as the node’s IP from its own perspective.

Configuring the openshift_set_node_ip Ansible variable forces node traffic through an interface other than the default network interface.

To change the node traffic interface:

  1. Set the openshift_set_node_ip Ansible variable to true.

  2. Set openshift_ip to the IP address for the node you want to configure, as shown in the inventory example below.
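
For example, as host variables on the node's inventory entry (the IP address below is a placeholder):

[nodes]
node1.example.com openshift_set_node_ip=true openshift_ip=192.168.1.5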

Although openshift_set_node_ip can be useful as a workaround for the cases stated in this section, it is generally not suited for production environments. This is because the node will no longer function properly if it receives a new IP address.