The guidance in this section is only relevant for installations with cloud provider integration. These guidelines apply to OpenShift Container Platform with software-defined networking (SDN), not Open Virtual Network (OVN).
Apply the following best practices to scale the number of worker machines in your OpenShift Container Platform cluster. You scale the worker machines by increasing or decreasing the number of replicas that are defined in the worker machine set.
When scaling up the cluster to higher node counts:
Spread nodes across all of the available zones for higher availability.
Scale up by no more than 25 to 50 machines at once.
Consider creating new machine sets in each available zone with alternative instance types of similar size to help mitigate any periodic provider capacity constraints. For example, on AWS, use m5.large and m5d.large.
Cloud providers might implement a quota for API services. Therefore, gradually scale the cluster, for example in batches as shown in the sketch below.
The controller might not be able to create the machines if the replicas in the machine set are set to higher numbers all at one time. The number of API requests that the cloud platform on which OpenShift Container Platform is deployed can handle impacts the process. While creating, checking, and updating the status of machines, the controller issues more queries against that platform. Because the cloud platform enforces API request limits, excessive queries can lead to machine creation failures.
Enable machine health checks when scaling to large node counts. In case of failures, the health checks monitor the condition and automatically repair unhealthy machines.
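The following shell sketch is one illustrative way to apply the gradual-scaling guidance above: it raises the replica count in small batches and waits for each batch to become ready before requesting more machines. The <machine_set_name> placeholder, the target count, and the batch size are assumptions for the example, not required values.
#!/usr/bin/env bash
# Sketch: scale a machine set toward TARGET replicas in increments of BATCH,
# waiting for each batch to report ready before requesting more machines.
set -euo pipefail

MACHINESET="<machine_set_name>"   # assumed placeholder
NAMESPACE="openshift-machine-api"
TARGET=100                        # assumed final replica count
BATCH=25                          # stays within the 25 to 50 machines-at-once guidance

current=$(oc get machineset "$MACHINESET" -n "$NAMESPACE" -o jsonpath='{.spec.replicas}')

while [ "$current" -lt "$TARGET" ]; do
  next=$(( current + BATCH ))
  [ "$next" -gt "$TARGET" ] && next=$TARGET

  oc scale machineset "$MACHINESET" -n "$NAMESPACE" --replicas="$next"

  # Wait until the machine set reports the requested replicas as ready.
  until [ "$(oc get machineset "$MACHINESET" -n "$NAMESPACE" -o jsonpath='{.status.readyReplicas}')" = "$next" ]; do
    sleep 60
  done

  current=$next
done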
When scaling large and dense clusters to lower node counts, the process can take a long time because it involves draining or evicting the objects running on the terminating nodes in parallel. Also, the client might start to throttle the requests if there are too many objects to evict, because eviction requests are subject to the default client QPS and burst rates.
When you modify a machine set, your changes only apply to machines that are created after you save the updated MachineSet custom resource (CR).
The changes do not affect existing machines.
You can replace the existing machines with new ones that reflect the updated configuration by scaling the machine set.
If you need to scale a machine set without making other changes, you do not need to delete the machines.
By default, the OpenShift Container Platform router pods are deployed on worker machines.
Because the router is required to access some cluster resources, including the web console, do not scale the machine set to 0.
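For example, before scaling a machine set down, you can check which nodes the router pods are scheduled on. This is an optional check; the openshift-ingress namespace is the default location for the router pods:
$ oc get pods -n openshift-ingress -o wide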
Your OpenShift Container Platform cluster uses the Machine API.
You are logged in to the cluster as an administrator by using the OpenShift CLI (oc).
Edit the machine set:
$ oc edit machineset <machine_set_name> -n openshift-machine-api
Note the value of the spec.replicas field, as you need it when scaling the machine set to apply the changes.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machine_set_name>
  namespace: openshift-machine-api
spec:
  replicas: 2 (1)
# ...
(1) The examples in this procedure show a machine set that has a replicas value of 2.
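If you only need the current replica count, one optional way to read it without opening the editor is a JSONPath query:
$ oc get machineset <machine_set_name> -n openshift-machine-api -o jsonpath='{.spec.replicas}'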
Update the machine set CR with the configuration options that you want and save your changes.
List the machines that are managed by the updated machine set by running the following command:
$ oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name>
NAME PHASE TYPE REGION ZONE AGE
<machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h
<machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h
For each machine that is managed by the updated machine set, set the delete annotation by running the following command:
$ oc annotate machine/<machine_name_original_1> \
-n openshift-machine-api \
machine.openshift.io/delete-machine="true"
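If the machine set manages many machines, you can annotate all of them with one command by using the machine set label selector instead of annotating each machine individually. This is an optional shortcut, not a required step:
$ oc annotate machines -n openshift-machine-api \
    -l machine.openshift.io/cluster-api-machineset=<machine_set_name> \
    machine.openshift.io/delete-machine="true"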
Scale the machine set to twice the number of replicas by running the following command:
$ oc scale --replicas=4 \ (1)
machineset <machine_set_name> \
-n openshift-machine-api
(1) The original example value of 2 is doubled to 4.
List the machines that are managed by the updated machine set by running the following command:
$ oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name>
NAME PHASE TYPE REGION ZONE AGE
<machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h
<machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h
<machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s
<machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s
When the new machines are in the Running phase, you can scale the machine set to the original number of replicas.
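To watch the machines until the new ones reach the Running phase, you can add the -w flag to the same list command, for example:
$ oc get -n openshift-machine-api machines \
    -l machine.openshift.io/cluster-api-machineset=<machine_set_name> -w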
Scale the machine set to the original number of replicas by running the following command:
$ oc scale --replicas=2 \ (1)
machineset <machine_set_name> \
-n openshift-machine-api
(1) The original example value of 2.
To verify that the machines without the updated configuration are deleted, list the machines that are managed by the updated machine set by running the following command:
$ oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name>
NAME PHASE TYPE REGION ZONE AGE
<machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h
<machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h
<machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s
<machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s
When the deletion of the machines with the old configuration is complete, only the machines created by the updated machine set remain:
NAME PHASE TYPE REGION ZONE AGE
<machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s
<machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s
To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command:
$ oc describe machine <machine_name_updated_1> -n openshift-machine-api
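As a lighter-weight alternative to oc describe, you can query a single field with JSONPath. The instanceType path shown here is an AWS-specific assumption; the provider spec fields differ by cloud provider:
$ oc get machine <machine_name_updated_1> -n openshift-machine-api \
    -o jsonpath='{.spec.providerSpec.value.instanceType}'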
Machine health checks automatically repair unhealthy machines in a particular machine pool.
To monitor machine health, create a resource to define the configuration for a controller. Specify a condition to check, such as a machine staying in the NotReady status for five minutes or reporting a permanent condition in the node-problem-detector, and a label for the set of machines that you want to monitor.
You cannot apply a machine health check to a machine with the master role.
The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and a new one is created to take its place. When a machine is deleted, you see a machine deleted event.
To limit the disruptive impact of machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops so that you can intervene manually.
Consider the timeouts carefully, accounting for workloads and requirements.
To stop the check, remove the resource.
There are limitations to consider before deploying a machine health check:
Only machines owned by a machine set are remediated by a machine health check.
Control plane machines are not currently supported and are not remediated if they are unhealthy.
If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately.
If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout, the machine is remediated.
A machine is remediated immediately if the Machine resource phase is Failed.
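For example, you can check the phase of a machine directly; a machine that reports the Failed phase is remediated immediately:
$ oc get machine <machine_name> -n openshift-machine-api -o jsonpath='{.status.phase}'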
The MachineHealthCheck resource for all cloud-based installation types other than bare metal resembles the following YAML file:
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example (1)
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: <role> (2)
      machine.openshift.io/cluster-api-machine-type: <role> (2)
      machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> (3)
  unhealthyConditions:
  - type: "Ready"
    timeout: "300s" (4)
    status: "False"
  - type: "Ready"
    timeout: "300s" (4)
    status: "Unknown"
  maxUnhealthy: "40%" (5)
  nodeStartupTimeout: "10m" (6)
(1) Specify the name of the machine health check to deploy.
(2) Specify a label for the machine pool that you want to check.
(3) Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a.
(4) Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine is remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine.
(5) Specify the number of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy, remediation is not performed.
(6) Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy.
Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource.
If you define a value for the maxUnhealthy field, the MachineHealthCheck compares that value with the number of machines in its target pool that it has determined to be unhealthy before remediating any machines. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit.
The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
The maxUnhealthy field can be set as either an integer or a percentage.
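For example, both of the following forms are valid for the field. These fragments are illustrative and show only the relevant line of the spec:
  maxUnhealthy: 2       # integer: remediation runs only while at most 2 machines are unhealthy
  maxUnhealthy: "40%"   # percentage of the machines that the check targets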
There are different remediation implementations depending on the maxUnhealthy value.
If maxUnhealthy is set to 2:
Remediation will be performed if 2 or fewer nodes are unhealthy
Remediation will not be performed if 3 or more nodes are unhealthy
These values are independent of how many machines are being checked by the machine health check.
If maxUnhealthy is set to 40% and there are 25 machines being checked:
Remediation will be performed if 10 or fewer nodes are unhealthy
Remediation will not be performed if 11 or more nodes are unhealthy
If maxUnhealthy is set to 40% and there are 6 machines being checked:
Remediation will be performed if 2 or fewer nodes are unhealthy
Remediation will not be performed if 3 or more nodes are unhealthy
The allowed number of machines is rounded down when the maxUnhealthy percentage does not resolve to a whole number of machines.
You can create a MachineHealthCheck resource for all MachineSets in your cluster. You should not create a MachineHealthCheck resource that targets control plane machines.
Install the oc command line interface.
Create a healthcheck.yml file that contains the definition of your machine health check.
Apply the healthcheck.yml file to your cluster:
$ oc apply -f healthcheck.yml
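To confirm that the resource was created, you can list the machine health checks in the openshift-machine-api namespace:
$ oc get machinehealthcheck -n openshift-machine-api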