Scaling

You can scale your application's pods manually, or automatically by using the Horizontal Pod Autoscaler (HPA). You can also scale your cluster nodes.

Manual pod scaling

You can manually scale your application’s pods by using one of the following methods:

  • Changing your ReplicaSet or deployment definition

  • Using the command line

  • Using the web console

This workshop starts by using only one pod for the microservice. By defining a replica count of 1 in your deployment definition, the deployment's ReplicaSet strives to keep one pod alive. You then learn how to define pod autoscaling by using the Horizontal Pod Autoscaler (HPA), which scales out more pods beyond your initial definition when the application experiences high load.
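
If you want to confirm the starting replica count yourself, you can read it directly from the deployment object. The following is a quick sketch, assuming the deployment is named ostoy-microservice as shown in the procedure below:

    $ oc get deployment ostoy-microservice -o jsonpath='{.spec.replicas}'

The command prints the desired replica count, which is 1 at this point in the workshop.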

Prerequisites
  • An active ROSA cluster

  • A deployed OSToy application

Procedure
  1. In the OSToy app, click the Networking tab in the navigational menu.

  2. In the "Intra-cluster Communication" section, locate the box located beneath "Remote Pods" that randomly changes colors. Inside the box, you see the microservice’s pod name. There is only one box in this example because there is only one microservice pod.

    Figure: HPA Menu
  3. Confirm that there is only one pod running for the microservice by running the following command:

    $ oc get pods
    Example output
    NAME                                  READY     STATUS    RESTARTS   AGE
    ostoy-frontend-679cb85695-5cn7x       1/1       Running   0          1h
    ostoy-microservice-86b4c6f559-p594d   1/1       Running   0          1h
  4. Download the ostoy-microservice-deployment.yaml and save it to your local machine.

  5. Change the deployment definition to use three pods instead of one by setting replicas to 3, as in the following example:

    spec:
      selector:
        matchLabels:
          app: ostoy-microservice
      replicas: 3
  6. Apply the replica changes by running the following command:

    $ oc apply -f ostoy-microservice-deployment.yaml

    You can also edit the ostoy-microservice-deployment.yaml file in the OpenShift Web Console by going to the Workloads > Deployments > ostoy-microservice > YAML tab.

  7. Confirm that there are now 3 pods by running the following command:

    $ oc get pods

    The output shows that there are now 3 pods for the microservice instead of only one.

    Example output
    NAME                                  READY   STATUS    RESTARTS   AGE
    ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          26m
    ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          81s
    ostoy-microservice-6666dcf455-5z56w   1/1     Running   0          81s
    ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          26m
  8. Scale the application by using the CLI or by using the web UI:

    • In the CLI, decrease the number of pods from 3 to 2 by running the following command (an equivalent oc patch approach is sketched after this procedure):

      $ oc scale deployment ostoy-microservice --replicas=2
    • From the navigational menu of the OpenShift web console UI, click Workloads > Deployments > ostoy-microservice.

    • On the left side of the page, locate the blue circle with a "3 Pod" label in the middle.

    • Clicking the arrows next to the circle scales the number of pods. Click the down arrow once to scale down to 2.

      Figure: UI Scale
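
If you prefer a scriptable alternative to clicking through the web console, you can also change the replica count with a JSON merge patch. The following is a minimal sketch against the same ostoy-microservice deployment:

    $ oc patch deployment ostoy-microservice --type merge -p '{"spec":{"replicas":2}}'

This has the same effect as oc scale or the web console arrows: the deployment controller reconciles the running pods to match the new replica count.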
Verification

Check your pod counts by using the CLI, the web UI, or the OSToy app:

  • From the CLI, confirm that you are using two pods for the microservice by running the following command:

    $ oc get pods
    Example output
    NAME                                  READY   STATUS    RESTARTS   AGE
    ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          75m
    ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          50m
    ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          75m
  • In the web UI, select Workloads > Deployments > ostoy-microservice.

    Figure: Verify the workload pods
  • You can also confirm that there are two pods in use by selecting Networking in the navigational menu of the OSToy app. There should be two colored boxes for the two pods.

    Figure: UI Scale
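
A deployment-level check also works if you prefer a single summary line over listing individual pods:

    $ oc get deployment ostoy-microservice

The READY column should report 2/2 once the scale-down completes.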

Pod Autoscaling

Red Hat OpenShift Service on AWS offers a Horizontal Pod Autoscaler (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.
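
For a CPU-based HPA to work, the pods it targets must declare a CPU request, because the utilization target is expressed as a percentage of that request. The OSToy microservice deployment already declares one; in general, the stanza in a pod specification looks like the following sketch (the values here are illustrative, not copied from OSToy):

    resources:
      requests:
        cpu: 50m      # the HPA's CPU percentage is measured against this value
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 128Mi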

Procedure
  1. In the OSToy app, select Pod Auto Scaling from the navigational menu.

    Figure: HPA Menu
  2. Create the HPA by running the following command:

    $ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10

    This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the ostoy-microservice deployment. The HPA increases and decreases the number of replicas to keep the average CPU utilization across all pods at 80% of the requested CPU (40 millicores). A declarative equivalent of this command is sketched after this procedure.

  3. On the Pod Auto Scaling > Horizontal Pod Autoscaling page, select Increase the load.

    Because increasing the load generates CPU-intensive calculations, the page can become unresponsive. This is expected. Click Increase the load only once. For more information about the process, see the microservice's GitHub repository.

    After a few minutes, the new pods display on the page, represented by colored boxes.

    The page can experience lag.
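
The oc autoscale command used in this procedure is shorthand for creating a HorizontalPodAutoscaler resource. If you prefer to manage the autoscaler declaratively, a roughly equivalent manifest looks like the following sketch (assuming the autoscaling/v1 API):

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: ostoy-microservice
    spec:
      # Scale the ostoy-microservice deployment between 1 and 10 replicas.
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: ostoy-microservice
      minReplicas: 1
      maxReplicas: 10
      # Target: 80% of each pod's requested CPU, averaged across pods.
      targetCPUUtilizationPercentage: 80

You can confirm that the HPA exists and view its current target and replica count by running the following command:

    $ oc get hpa ostoy-microservice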

Verification

Check your pod counts with one of the following methods:

  • In the OSToy application’s web UI, see the remote pods box:

    Figure: HPA Main

    Because the workload starts with only one pod, increasing the load should trigger the HPA to create additional pods.

  • In the CLI, run the following command (you can also inspect the HPA's scaling events; see the sketch after this list):

    $ oc get pods --field-selector=status.phase=Running | grep microservice
    Example output
    ostoy-microservice-79894f6945-cdmbd   1/1     Running   0          3m14s
    ostoy-microservice-79894f6945-mgwk7   1/1     Running   0          4h24m
    ostoy-microservice-79894f6945-q925d   1/1     Running   0          3m14s
  • You can also verify autoscaling from the OpenShift web console:

    1. In the OpenShift web console navigational menu, click Observe > Dashboards.

    2. In the dashboard, select Kubernetes / Compute Resources / Namespace (Pods) and your namespace ostoy.

      Figure: Select metrics
    3. A graph appears showing your resource usage for CPU and memory. The top graph shows recent CPU consumption per pod, and the lower graph shows memory usage. The following list describes the callouts in the graph:

      1. The load increased (A).

      2. Two new pods were created (B and C).

      3. The thickness of each graph represents the CPU consumption and indicates which pods handled more load.

      4. The load decreased (D), and the pods were deleted.

        Figure: Select metrics
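
From the CLI, you can also inspect the HPA itself instead of counting pods; its event log records each scaling decision:

    $ oc describe hpa ostoy-microservice

The Events section at the end of the output lists SuccessfulRescale entries as the autoscaler reacts to the changing load.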

Node Autoscaling

Red Hat OpenShift Service on AWS allows you to use node autoscaling. In this scenario, you create a new project with a job whose workload is too large for the cluster to handle. With autoscaling enabled, when the load exceeds your current capacity, the cluster automatically creates new nodes to handle the load.

Prerequisites
  • Autoscaling is enabled on your machine pools.
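
If autoscaling is not yet enabled on a machine pool, one way to turn it on is with the rosa CLI. The following is a minimal sketch; <cluster-name> and <machinepool-id> are placeholders for your own values, and the replica bounds are illustrative:

    $ rosa edit machinepool --cluster=<cluster-name> --enable-autoscaling --min-replicas=2 --max-replicas=4 <machinepool-id>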

Procedure
  1. Create a new project called autoscale-ex by running the following command:

    $ oc new-project autoscale-ex
  2. Create the job by running the following command:

    $ oc create -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/job-work-queue.yaml
  3. After a few minutes, run the following command to see the pods:

    $ oc get pods
    Example output
    NAME                     READY   STATUS    RESTARTS   AGE
    work-queue-5x2nq-24xxn   0/1     Pending   0          10s
    work-queue-5x2nq-57zpt   0/1     Pending   0          10s
    work-queue-5x2nq-58bvs   0/1     Pending   0          10s
    work-queue-5x2nq-6c5tl   1/1     Running   0          10s
    work-queue-5x2nq-7b84p   0/1     Pending   0          10s
    work-queue-5x2nq-7hktm   0/1     Pending   0          10s
    work-queue-5x2nq-7md52   0/1     Pending   0          10s
    work-queue-5x2nq-7qgmp   0/1     Pending   0          10s
    work-queue-5x2nq-8279r   0/1     Pending   0          10s
    work-queue-5x2nq-8rkj2   0/1     Pending   0          10s
    work-queue-5x2nq-96cdl   0/1     Pending   0          10s
    work-queue-5x2nq-96tfr   0/1     Pending   0          10s
  4. Because there are many pods in a Pending state, this status should trigger the autoscaler to create more nodes in your machine pool. Allow time for these worker nodes to be created.

  5. After a few minutes, use the following command to see how many worker nodes you now have:

    $ oc get nodes
    Example output
    NAME                                         STATUS   ROLES          AGE     VERSION
    ip-10-0-138-106.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
    ip-10-0-153-68.us-west-2.compute.internal    Ready    worker         2m12s   v1.23.5+3afdacb
    ip-10-0-165-183.us-west-2.compute.internal   Ready    worker         2m8s    v1.23.5+3afdacb
    ip-10-0-176-123.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
    ip-10-0-195-210.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-196-84.us-west-2.compute.internal    Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-203-104.us-west-2.compute.internal   Ready    worker         2m6s    v1.23.5+3afdacb
    ip-10-0-217-202.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-225-141.us-west-2.compute.internal   Ready    worker         23h     v1.23.5+3afdacb
    ip-10-0-231-245.us-west-2.compute.internal   Ready    worker         2m11s   v1.23.5+3afdacb
    ip-10-0-245-27.us-west-2.compute.internal    Ready    worker         2m8s    v1.23.5+3afdacb
    ip-10-0-245-7.us-west-2.compute.internal     Ready    worker         23h     v1.23.5+3afdacb

    You can see that additional worker nodes were automatically created to handle the workload.

  6. Return to the OSToy app by entering the following command:

    $ oc project ostoy
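
When you are finished experimenting, you can delete the example project so that the job's pods stop consuming capacity and the autoscaler can scale the worker nodes back down. A minimal cleanup sketch:

    $ oc delete project autoscale-ex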