When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks.
To list all nodes that are known to the master:
$ oc get nodes
NAME                 LABELS                                        STATUS
node1.example.com    kubernetes.io/hostname=node1.example.com     Ready
node2.example.com    kubernetes.io/hostname=node2.example.com     Ready
To only list information about a single node, replace <node> with the full node name:
$ oc get node <node>
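For example, assuming a node named node1.example.com (the name and label shown here are illustrative):

$ oc get node node1.example.com
NAME                 LABELS                                        STATUS
node1.example.com    kubernetes.io/hostname=node1.example.com     Ready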
The STATUS column in the output of these commands can show nodes with the following conditions:
Condition | Description |
---|---|
Ready | The node is passing the health checks performed from the master by returning StatusOK. |
NotReady | The node is not passing the health checks performed from the master. |
SchedulingDisabled | Pods cannot be scheduled for placement on the node. |

The STATUS column can also show Unknown for a node if the CLI cannot find any node condition.
To get more detailed information about a specific node, including the reason for the current condition:
$ oc describe node <node>
For example:
$ oc describe node node1.example.com
Name:                        node1.example.com
Labels:                      kubernetes.io/hostname=node1.example.com
CreationTimestamp:           Wed, 10 Jun 2015 17:22:34 +0000
Conditions:
  Type      Status    LastHeartbeatTime                  LastTransitionTime                 Reason    Message
  Ready     True      Wed, 10 Jun 2015 19:56:16 +0000    Wed, 10 Jun 2015 17:22:34 +0000              kubelet is posting ready status
Addresses:                   127.0.0.1
Capacity:
  memory:                    1017552Ki
  pods:                      100
  cpu:                       2
Version:
  Kernel Version:            3.17.4-301.fc21.x86_64
  OS Image:                  Fedora 21 (Twenty One)
  Container Runtime Version: docker://1.6.0
  Kubelet Version:           v0.17.1-804-g496be63
  Kube-Proxy Version:        v0.17.1-804-g496be63
ExternalID:                  node1.example.com
Pods:                        (2 in total)
  docker-registry-1-9yyw5
  router-1-maytv
No events.
To add nodes to your existing OpenShift cluster, you can run an Ansible playbook that handles installing the node components, generating the required certificates, and other important steps. See the advanced installation method for instructions on running the playbook directly.
Alternatively, if you used the quick installation method, you can re-run the installer to add nodes, which performs the same steps.
When you delete a node with the CLI, although the node object is deleted in Kubernetes, the pods that exist on the node itself are not deleted. However, the pods cannot be accessed by OpenShift. The behavior around deleting nodes and pods with the CLI is under active development.
To delete a node:
$ oc delete node <node>
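For example, assuming a node named node1.example.com (the exact confirmation message printed varies by version):

$ oc delete node node1.example.com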
To add or update labels on a node:
$ oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>
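For example, to apply hypothetical region and zone labels to a node:

$ oc label node node1.example.com region=primary zone=west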
To see more detailed usage:
$ oc label -h
To list all or selected pods on one or more nodes:
$ oadm manage-node <node1> <node2> \
    --list-pods [--pod-selector=<pod_selector>] [-o json|yaml]
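For example, assuming node1.example.com and a hypothetical deploymentconfig=router pod label, the following lists only the matching pods on that node as YAML:

$ oadm manage-node node1.example.com \
    --list-pods --pod-selector=deploymentconfig=router -o yaml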
To list all or selected pods on selected nodes:
$ oadm manage-node --selector=<node_selector> \
    --list-pods [--pod-selector=<pod_selector>] [-o json|yaml]
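For example, to list pods on all nodes carrying a hypothetical region=primary label:

$ oadm manage-node --selector=region=primary --list-pods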
By default, healthy nodes with a Ready status are marked as schedulable, meaning that new pods are allowed for placement on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected.
To mark a node or nodes as unschedulable:
$ oadm manage-node <node1> <node2> --schedulable=false
For example:
$ oadm manage-node node1.example.com --schedulable=false
NAME                 LABELS                                        STATUS
node1.example.com    kubernetes.io/hostname=node1.example.com     Ready,SchedulingDisabled
To mark a currently unschedulable node or nodes as schedulable:
$ oadm manage-node <node1> <node2> --schedulable
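For example (the node name and output shown are illustrative):

$ oadm manage-node node1.example.com --schedulable
NAME                 LABELS                                        STATUS
node1.example.com    kubernetes.io/hostname=node1.example.com     Ready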
Alternatively, instead of specifying individual node names (e.g., <node1> <node2>), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable.
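For example, assuming a hypothetical region=primary node label, the following marks all matching nodes unschedulable:

$ oadm manage-node --selector=region=primary --schedulable=false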
Evacuating pods allows you to migrate all or selected pods from a given node or nodes. Nodes must first be marked unschedulable to perform pod evacuation.
Only pods backed by a replication controller can be evacuated; the replication controllers create new pods on other nodes and remove the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod selector. The selector matches on labels, so all pods with the specified label are evacuated.
To list pods that will be migrated without actually performing the evacuation, use the --dry-run option:
$ oadm manage-node <node1> <node2> \
    --evacuate --dry-run [--pod-selector=<pod_selector>]
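For example, assuming node1.example.com and the hypothetical deploymentconfig=router pod label from earlier, the following lists the router pods that would be migrated, without moving them:

$ oadm manage-node node1.example.com \
    --evacuate --dry-run --pod-selector=deploymentconfig=router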
To actually evacuate all or selected pods on one or more nodes:
$ oadm manage-node <node1> <node2> \
    --evacuate [--pod-selector=<pod_selector>]
You can force deletion of bare pods by using the --force option:
$ oadm manage-node <node1> <node2> \
    --evacuate --force [--pod-selector=<pod_selector>]
Alternatively, instead of specifying individual node names (e.g., <node1> <node2>), you can use the --selector=<node_selector> option to evacuate pods on selected nodes.
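For example, to evacuate pods from every node carrying a hypothetical region=primary label:

$ oadm manage-node --selector=region=primary --evacuate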