When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks.
To list all nodes that are known to the master:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
master.example.com Ready master 7h v1.9.1+a0ce1bc657
node1.example.com Ready compute 7h v1.9.1+a0ce1bc657
node2.example.com Ready compute 7h v1.9.1+a0ce1bc657
To list all nodes with additional details, including the external IP address, OS image, kernel version, and container runtime:
$ oc get nodes -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-172-18-0-39.ec2.internal Ready infra 1d v1.10.0+b81c8f8 54.172.185.130 Red Hat Enterprise Linux Server 7.5 (Maipo) 3.10.0-862.el7.x86_64 docker://1.13.1
ip-172-18-10-95.ec2.internal Ready master 1d v1.10.0+b81c8f8 54.88.22.81 Red Hat Enterprise Linux Server 7.5 (Maipo) 3.10.0-862.el7.x86_64 docker://1.13.1
ip-172-18-8-35.ec2.internal Ready compute 1d v1.10.0+b81c8f8 34.230.50.57 Red Hat Enterprise Linux Server 7.5 (Maipo) 3.10.0-862.el7.x86_64 docker://1.13.1
To list only information about a single node, replace <node> with the full node name:
$ oc get node <node>
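For example, using the node1.example.com node from the earlier listing:
$ oc get node node1.example.com
NAME STATUS ROLES AGE VERSION
node1.example.com Ready compute 7h v1.9.1+a0ce1bc657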
The STATUS column in the output of these commands can show nodes with the following conditions:
Condition | Description |
---|---|
Ready | The node is passing the health checks performed from the master by returning StatusOK. |
NotReady | The node is not passing the health checks performed from the master. |
SchedulingDisabled | Pods cannot be scheduled for placement on the node. |
The STATUS column can also show Unknown for a node if the CLI cannot find any node condition.
To get more detailed information about a specific node, including the reason for the current condition:
$ oc describe node <node>
For example:
$ oc describe node node1.example.com
Name: node1.example.com (1)
Roles: compute (2)
Labels: beta.kubernetes.io/arch=amd64 (3)
beta.kubernetes.io/os=linux
kubernetes.io/hostname=m01.example.com
node-role.kubernetes.io/compute=true
node-role.kubernetes.io/infra=true
node-role.kubernetes.io/master=true
zone=default
Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true (4)
CreationTimestamp: Thu, 24 May 2018 11:46:56 -0400
Taints: <none> (5)
Unschedulable: false
Conditions: (6)
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Tue, 17 Jul 2018 11:47:30 -0400 Tue, 10 Jul 2018 15:45:16 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Tue, 17 Jul 2018 11:47:30 -0400 Tue, 10 Jul 2018 15:45:16 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 17 Jul 2018 11:47:30 -0400 Tue, 10 Jul 2018 16:03:54 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Tue, 17 Jul 2018 11:47:30 -0400 Mon, 16 Jul 2018 15:10:25 -0400 KubeletReady kubelet is posting ready status
PIDPressure False Tue, 17 Jul 2018 11:47:30 -0400 Thu, 05 Jul 2018 10:06:51 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Addresses: (7)
InternalIP: 192.168.122.248
Hostname: node1.example.com
Capacity: (8)
cpu: 2
hugepages-2Mi: 0
memory: 8010336Ki
pods: 40
Allocatable:
cpu: 2
hugepages-2Mi: 0
memory: 7907936Ki
pods: 40
System Info: (9)
Machine ID: b3adb9acbc49fc1f9a7d6
System UUID: B3ADB9A-B0CB-C49FC1F9A7D6
Boot ID: 9359d15aec9-81a20aef5876
Kernel Version: 3.10.0-693.21.1.el7.x86_64
OS Image: OpenShift Enterprise
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.10.0+b81c8f8
Kube-Proxy Version: v1.10.0+b81c8f8
ExternalID: node1.example.com
Non-terminated Pods: (14 in total) (10)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default docker-registry-2-w252l 100m (5%) 0 (0%) 256Mi (3%) 0 (0%)
default registry-console-2-dpnc9 0 (0%) 0 (0%) 0 (0%) 0 (0%)
default router-2-5snb2 100m (5%) 0 (0%) 256Mi (3%) 0 (0%)
kube-service-catalog apiserver-jh6gt 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-service-catalog controller-manager-z4t5j 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system master-api-m01.example.com 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system master-controllers-m01.example.com 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system master-etcd-m01.example.com 0 (0%) 0 (0%) 0 (0%) 0 (0%)
openshift-ansible-service-broker asb-1-hnn5t 0 (0%) 0 (0%) 0 (0%) 0 (0%)
openshift-node sync-dvhvs 0 (0%) 0 (0%) 0 (0%) 0 (0%)
openshift-sdn ovs-zjs5k 100m (5%) 200m (10%) 300Mi (3%) 400Mi (5%)
openshift-sdn sdn-zr4cb 100m (5%) 0 (0%) 200Mi (2%) 0 (0%)
openshift-template-service-broker apiserver-s9n7t 0 (0%) 0 (0%) 0 (0%) 0 (0%)
openshift-web-console webconsole-785689b664-q7s9j 100m (5%) 0 (0%) 100Mi (1%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
500m (25%) 200m (10%) 1112Mi (14%) 400Mi (5%)
Events: (11)
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk
Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID
Normal Starting 6d kubelet, m01.example.com Starting kubelet.
...
1 | The name of the node. |
2 | The role of the node, either master, compute, or infra. |
3 | The labels applied to the node. |
4 | The annotations applied to the node. |
5 | The taints applied to the node. |
6 | Node conditions. |
7 | The IP address and host name of the node. |
8 | The pod resources and allocatable resources. |
9 | Information about the node host. |
10 | The pods on the node. |
11 | The events reported by the node. |
You can display usage statistics about nodes, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption.
To view the usage statistics:
$ oc adm top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
node-1 297m 29% 4263Mi 55%
node-0 55m 5% 1201Mi 15%
infra-1 85m 8% 1319Mi 17%
infra-0 182m 18% 2524Mi 32%
master-0 178m 8% 2584Mi 16%
To view the usage statistics for nodes with labels:
$ oc adm top node --selector=''
You must choose the selector (label query) to filter on. Supports =, ==, and !=.
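For example, to view usage statistics for only the compute nodes, assuming they carry the node-role.kubernetes.io/compute=true label shown in the node descriptions in this topic:
$ oc adm top node --selector='node-role.kubernetes.io/compute=true'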
You must have |
The |
You can add new hosts to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on only the new hosts. Before running the scaleup.yml playbook, complete all prerequisite host preparation steps.
The scaleup.yml playbook configures only the new host. It does not update NO_PROXY in master services, and it does not restart master services.
You must have an existing inventory file, for example /etc/ansible/hosts, that is representative of your current cluster configuration to run the scaleup.yml playbook.
If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/hosts for the last inventory file that the installer generated and use that file as your inventory file. You can modify this file as required. You must then specify the file location with -i when you run the ansible-playbook command.
See the cluster maximums section for the recommended maximum number of nodes.
Ensure you have the latest playbooks by updating the openshift-ansible package:
# yum update openshift-ansible
Edit your /etc/ansible/hosts file and add new_<host_type> to the [OSEv3:children] section. For example, to add a new node host, add new_nodes:
[OSEv3:children]
masters
nodes
new_nodes
To add new master hosts, add new_masters.
Create a [new_<host_type>] section to specify host information for the new hosts. Format this section like an existing section, as shown in the following example of adding a new node:
[nodes]
master[1:3].example.com
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'

[new_nodes]
node3.example.com openshift_node_group_name='node-config-infra'
See Configuring Host Variables for more options.
When adding new masters, add hosts to both the [new_masters] section and the [new_nodes] section to ensure that the new master host is part of the OpenShift SDN:
[masters]
master[1:2].example.com

[new_masters]
master3.example.com

[nodes]
master[1:2].example.com
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'

[new_nodes]
master3.example.com
If you label a master host with the |
Change to the playbook directory and run the openshift_node_group.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/file] \
playbooks/openshift-master/openshift_node_group.yml
This creates the configmap for the new node groups, and ultimately, the configuration file of the node on the host.
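For example, you can confirm that the configmaps were created by listing them in the openshift-node project:
$ oc get cm -n openshift-node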
Running the openshift_node_group.yml playbook only updates new nodes. It cannot be run to update existing nodes in a cluster.
Run the scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.
For additional nodes:
$ ansible-playbook [-i /path/to/file] \
playbooks/openshift-node/scaleup.yml
For additional masters:
$ ansible-playbook [-i /path/to/file] \
playbooks/openshift-master/scaleup.yml
Set the node label to logging-infra-fluentd=true if you deployed the EFK stack in your cluster:
# oc label node/new-node.example.com logging-infra-fluentd=true
After the playbook runs, verify the installation.
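For example, confirm that each new node appears in the node list and reports a Ready status:
$ oc get nodes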
Move any hosts that you defined in the [new_<host_type>] section to their appropriate section. By moving these hosts, subsequent playbook runs that use this inventory file treat the nodes correctly. You can keep the empty [new_<host_type>] section. For example, when adding new nodes:
[nodes]
master[1:3].example.com
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
node3.example.com openshift_node_group_name='node-config-compute'
infra-node1.example.com openshift_node_group_name='node-config-infra'
infra-node2.example.com openshift_node_group_name='node-config-infra'

[new_nodes]
When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node itself are not deleted. Any bare pods not backed by a replication controller are inaccessible to OpenShift Container Platform; pods backed by replication controllers are rescheduled to other available nodes; and local manifest pods must be deleted manually.
To delete a node from the OpenShift Container Platform cluster:
Evacuate pods from the node you are preparing to delete.
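For example, you can drain the node by using the oc adm drain command described later in this topic:
$ oc adm drain <node> --ignore-daemonsets=true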
Delete the node object:
$ oc delete node <node>
Check that the node has been removed from the node list:
$ oc get nodes
Pods are now scheduled only on the remaining nodes that are in Ready state.
If you want to uninstall all OpenShift Container Platform content from the node host, including all pods and containers, continue to Uninstalling Nodes and follow the procedure using the uninstall.yml playbook. The procedure assumes general understanding of the cluster installation process using Ansible.
To add or update labels on a node:
$ oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>
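For example, to apply the zone=default label that appears in the node description earlier in this topic:
$ oc label node node1.example.com zone=default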
To see more detailed usage:
$ oc label -h
To list all or selected pods on one or more nodes:
$ oc adm manage-node <node1> <node2> \
--list-pods [--pod-selector=<pod_selector>] [-o json|yaml]
To list all or selected pods on selected nodes:
$ oc adm manage-node --selector=<node_selector> \
--list-pods [--pod-selector=<pod_selector>] [-o json|yaml]
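For example, assuming the node-role.kubernetes.io/compute=true label used elsewhere in this topic, to list the pods on all compute nodes:
$ oc adm manage-node --selector='node-role.kubernetes.io/compute=true' --list-pods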
By default, healthy nodes with a Ready status are marked as schedulable, meaning that new pods are allowed for placement on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected.
To mark a node or nodes as unschedulable:
$ oc adm manage-node <node1> <node2> --schedulable=false
For example:
$ oc adm manage-node node1.example.com --schedulable=false
NAME LABELS STATUS
node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled
To mark a currently unschedulable node or nodes as schedulable:
$ oc adm manage-node <node1> <node2> --schedulable
Alternatively, instead of specifying specific node names (for example, <node1> <node2>), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable.
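For example, assuming the node-role.kubernetes.io/infra=true label shown earlier, you could mark all infrastructure nodes unschedulable with:
$ oc adm manage-node --selector='node-role.kubernetes.io/infra=true' --schedulable=false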
Evacuating pods allows you to migrate all or selected pods from a given node. Nodes must first be marked unschedulable to perform pod evacuation.
Only pods backed by a replication controller can be evacuated; the replication controllers create new pods on other nodes and remove the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selector is based on labels, so all the pods with the specified label will be evacuated.
To evacuate all or selected pods on a node:
$ oc adm drain <node> [--pod-selector=<pod_selector>]
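For example, to evacuate only the pods that carry a hypothetical app=frontend label:
$ oc adm drain <node> --pod-selector='app=frontend'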
You can force deletion of bare pods by using the --force option. When set to true, deletion continues even if there are pods not managed by a replication controller, ReplicaSet, job, DaemonSet, or StatefulSet:
$ oc adm drain <node> --force=true
You can use --grace-period to set a period of time in seconds for each pod to terminate gracefully. If negative, the default value specified in the pod is used:
$ oc adm drain <node> --grace-period=-1
You can use --ignore-daemonsets and set it to true to ignore DaemonSet-managed pods:
$ oc adm drain <node> --ignore-daemonsets=true
You can use --timeout to set the length of time to wait before giving up. A value of 0 sets an infinite length of time:
$ oc adm drain <node> --timeout=5s
You can use --delete-local-data and set it to true to continue deletion even if there are pods using emptyDir (local data that is deleted when the node is drained):
$ oc adm drain <node> --delete-local-data=true
To list objects that will be migrated without actually performing the evacuation, use the --dry-run option and set it to true:
$ oc adm drain <node> --dry-run=true
Instead of specifying a specific node name, you can use the --selector=<node_selector> option to evacuate pods on nodes that match the selector.
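For example, assuming the node-role.kubernetes.io/infra=true label shown earlier:
$ oc adm drain --selector='node-role.kubernetes.io/infra=true' --ignore-daemonsets=true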
To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes.
Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases.
Infrastructure nodes are nodes that are labeled to run pieces of the OpenShift Container Platform environment. Currently, the easiest way to manage node reboots is to ensure that there are at least three nodes available to run infrastructure. The scenario below demonstrates a common mistake that can lead to service interruptions for the applications running on OpenShift Container Platform when only two nodes are available.
Node A is marked unschedulable and all pods are evacuated.
The registry pod running on that node is now redeployed on node B. This means node B is now running both registry pods.
Node B is now marked unschedulable and is evacuated.
The service exposing the two pod endpoints on node B, for a brief period of time, loses all endpoints until they are redeployed to node A.
The same process using three infrastructure nodes does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back in to rotation is left running zero registries. The other two nodes will run two and one registries respectively. The best solution is to rely on pod anti-affinity. This is an alpha feature in Kubernetes that is available for testing now, but is not yet supported for production workloads.
Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred.
apiVersion: v1
kind: Pod
metadata:
name: with-pod-antiaffinity
spec:
affinity:
podAntiAffinity: (1)
preferredDuringSchedulingIgnoredDuringExecution: (2)
- weight: 100 (3)
podAffinityTerm:
labelSelector:
matchExpressions:
- key: docker-registry (4)
operator: In (5)
values:
- default
topologyKey: kubernetes.io/hostname
1 | Stanza to configure pod anti-affinity. |
2 | Defines a preferred rule. |
3 | Specifies a weight for a preferred rule. The node with the highest weight is preferred. |
4 | Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. |
5 | The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In, NotIn, Exists, or DoesNotExist. |
This example assumes the container image registry pod has a label of docker-registry=default. Pod anti-affinity can use any Kubernetes match expression.
The last required step is to enable the MatchInterPodAffinity scheduler predicate in /etc/origin/master/scheduler.json. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the next node can be restarted.
In most cases, a pod running an OpenShift Container Platform router will expose a host port. The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts.
In rare cases, a router pod might not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes.
During installation, OpenShift Container Platform creates a configmap in the openshift-node project for each type of node group:
node-config-master
node-config-infra
node-config-compute
node-config-all-in-one
node-config-master-infra
To make configuration changes to an existing node, edit the appropriate configuration map. A sync pod on each node watches for changes in the configuration maps. During installation, the sync pods are created by using sync DaemonSets, and a /etc/origin/node/node-config.yaml file, where the node configuration parameters reside, is added to each node. When a sync pod detects a configuration map change, it updates the node-config.yaml on all nodes in that node group and restarts the atomic-openshift-node.service on the appropriate nodes.
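For example, to make a change that applies to the infra node group, you would edit its configuration map; the compute group is edited the same way later in this topic:
$ oc edit cm node-config-infra -n openshift-node
To list the node configuration maps in the openshift-node project: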
$ oc get cm -n openshift-node
NAME DATA AGE
node-config-all-in-one 1 1d
node-config-compute 1 1d
node-config-infra 1 1d
node-config-master 1 1d
node-config-master-infra 1 1d
apiVersion: v1
authConfig: (1)
authenticationCacheSize: 1000
authenticationCacheTTL: 5m
authorizationCacheSize: 1000
authorizationCacheTTL: 5m
dnsBindAddress: 127.0.0.1:53
dnsDomain: cluster.local
dnsIP: 0.0.0.0 (2)
dnsNameservers: null
dnsRecursiveResolvConf: /etc/origin/node/resolv.conf
dockerConfig:
dockerShimRootDirectory: /var/lib/dockershim
dockerShimSocket: /var/run/dockershim.sock
execHandlerName: native
enableUnidling: true
imageConfig:
format: registry.reg-aws.openshift.com/openshift3/ose-${component}:${version}
latest: false
iptablesSyncPeriod: 30s
kind: NodeConfig
kubeletArguments: (3)
bootstrap-kubeconfig:
- /etc/origin/node/bootstrap.kubeconfig
cert-dir:
- /etc/origin/node/certificates
cloud-config:
- /etc/origin/cloudprovider/aws.conf
cloud-provider:
- aws
enable-controller-attach-detach:
- 'true'
feature-gates:
- RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
node-labels:
- node-role.kubernetes.io/compute=true
pod-manifest-path:
- /etc/origin/node/pods (4)
rotate-certificates:
- 'true'
masterClientConnectionOverrides:
acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
burst: 40
contentType: application/vnd.kubernetes.protobuf
qps: 20
masterKubeConfig: node.kubeconfig
networkConfig: (5)
mtu: 8951
networkPluginName: redhat/openshift-ovs-subnet (6)
servingInfo: (7)
bindAddress: 0.0.0.0:10250
bindNetwork: tcp4
clientCA: client-ca.crt (8)
volumeConfig:
localQuota:
perFSGroup: null
volumeDirectory: /var/lib/origin/openshift.local.volumes
1 | Authentication and authorization configuration options. |
2 | IP address prepended to a pod’s /etc/resolv.conf. |
3 | Key value pairs that are passed directly to the Kubelet that match the Kubelet’s command line arguments. |
4 | The path to the pod manifest file or directory. A directory must contain one or more manifest files. OpenShift Container Platform uses the manifest files to create pods on the node. |
5 | The pod network settings on the node. |
6 | Software defined network (SDN) plug-in. Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in, redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in, or redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in. |
7 | Certificate information for the node. |
8 | Optional: PEM-encoded certificate bundle. If set, a valid client certificate must be presented and validated against the certificate authorities in the specified file before the request headers are checked for user names. |
Do not manually modify the /etc/origin/node/node-config.yaml file.
You can configure node resources by adding kubelet arguments to the node configuration map.
Edit the configuration map:
$ oc edit cm node-config-compute -n openshift-node
Add the kubeletArguments section and specify your options:
kubeletArguments:
max-pods: (1)
- "40"
resolv-conf: (2)
- "/etc/resolv.conf"
image-gc-high-threshold: (3)
- "90"
image-gc-low-threshold: (4)
- "80"
kube-api-qps: (5)
- "20"
kube-api-burst: (6)
- "40"
1 | Maximum number of pods that can run on this kubelet. |
2 | Resolver configuration file used as the basis for the container DNS resolution configuration. |
3 | The percent of disk usage after which image garbage collection is always run. Default: 90% |
4 | The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Default: 80% |
5 | The queries per second (QPS) to use while talking with the Kubernetes API server. |
6 | The burst to use while talking with the Kubernetes API server. |
To view all available kubelet options:
$ hyperkube kubelet -h
See the Cluster maximums page for the maximum supported limits for each version of OpenShift Container Platform.
In the /etc/origin/node/node-config.yaml file, one parameter controls the maximum number of pods that can be scheduled to a node: max-pods. When the max-pods option is in use, it limits the number of pods on a node. Exceeding this value can result in:
Increased CPU utilization on both OpenShift Container Platform and Docker.
Slow pod scheduling.
Potential out-of-memory scenarios (depends on the amount of memory in the node).
Exhausting the pool of IP addresses.
Resource overcommitting, leading to poor user application performance.
In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running.
max-pods sets the number of pods the node can run to a fixed value, regardless of the properties of the node. Cluster Limits documents maximum supported values for max-pods.
kubeletArguments:
max-pods:
- "250"
Using the above example, the default value for max-pods is 250.
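After the sync pods apply the change to the nodes in the group, you can confirm the new value in the pods entries of the Capacity and Allocatable sections of the node description:
$ oc describe node <node>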
As you download container images and run and delete containers, Docker does not always free up mapped disk space. As a result, over time you can run out of space on a node, which might prevent OpenShift Container Platform from being able to create new pods or cause pod creation to take several minutes.
For example, the following shows pods that are still in the ContainerCreating state after six minutes, and the events log shows a FailedSync event.
$ oc get pod
NAME READY STATUS RESTARTS AGE
cakephp-mysql-persistent-1-build 0/1 ContainerCreating 0 6m
mysql-1-9767d 0/1 ContainerCreating 0 2m
mysql-1-deploy 0/1 ContainerCreating 0 6m
$ oc get events
LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
6m 6m 1 cakephp-mysql-persistent-1-build Pod Normal Scheduled default-scheduler Successfully assigned cakephp-mysql-persistent-1-build to ip-172-31-71-195.us-east-2.compute.internal
2m 5m 4 cakephp-mysql-persistent-1-build Pod Warning FailedSync kubelet, ip-172-31-71-195.us-east-2.compute.internal Error syncing pod
2m 4m 4 cakephp-mysql-persistent-1-build Pod Normal SandboxChanged kubelet, ip-172-31-71-195.us-east-2.compute.internal Pod sandbox changed, it will be killed and re-created.
One solution to this problem is to reset Docker storage to remove artifacts not needed by Docker.
On the node where you want to restart Docker storage:
Run the following command to mark the node as unschedulable:
$ oc adm manage-node <node> --schedulable=false
Run the following command to shut down Docker and the atomic-openshift-node service:
$ systemctl stop docker atomic-openshift-node
Run the following command to remove the local volume directory:
$ rm -rf /var/lib/origin/openshift.local.volumes
This command clears the local image cache. As a result, images, including the ose-* images, will need to be re-pulled. This might result in slower pod start times while the image store recovers.
Remove the /var/lib/docker directory:
$ rm -rf /var/lib/docker
Run the following command to reset the Docker storage:
$ docker-storage-setup --reset
Run the following command to recreate the Docker storage:
$ docker-storage-setup
Recreate the /var/lib/docker directory:
$ mkdir /var/lib/docker
Run the following command to restart Docker and the atomic-openshift-node service:
$ systemctl start docker atomic-openshift-node
Restart the node service by rebooting the host:
# systemctl restart atomic-openshift-node.service
Run the following command to mark the node as schedulable:
$ oc adm manage-node <node> --schedulable=true