Quota | Developer Guide | OpenShift Enterprise 3.1

Overview

A ResourceQuota object enumerates hard resource usage limits per project. It limits the total number of a particular type of object that may be created in a project, and the total amount of compute resources that may be consumed by resources in that project.

Usage Limits

The following table describes the set of limits that may be enforced by a ResourceQuota.

Table 1. Usage limits

Resource Name            Description
cpu                      Total requested cpu usage across all containers
memory                   Total requested memory usage across all containers
pods                     Total number of pods
replicationcontrollers   Total number of replication controllers
resourcequotas           Total number of resource quotas
services                 Total number of services
secrets                  Total number of secrets
persistentvolumeclaims   Total number of persistent volume claims
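
As an example, a quota that caps only object counts, including the secrets and persistentvolumeclaims resources that the sample file later in this topic does not set, might look like the following sketch. The name object-counts and the chosen values are illustrative:

{
  "apiVersion": "v1",
  "kind": "ResourceQuota",
  "metadata": {
    "name": "object-counts"
  },
  "spec": {
    "hard": {
      "secrets": "10",
      "persistentvolumeclaims": "4"
    }
  }
}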

Quota Enforcement

After a project quota is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics.

Once a quota is created and usage statistics are up-to-date, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota usage is decremented during the next full recalculation of quota statistics for the project; a configurable sync period determines how long it takes for the usage statistics to settle at their currently observed system value.

If project modifications exceed a quota usage limit, the server denies the action and returns an appropriate error message to the end-user, explaining which quota constraint was violated and what the currently observed usage statistics are in the system.
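
The exact wording of the message varies by release, but an illustrative (not verbatim) denial for a project at its pods limit might look like the following; pod.json and the counts are hypothetical:

$ oc create -f pod.json
Error from server: pods "pod-x" is forbidden: exceeded quota: quota,
requested: pods=1, used: pods=10, limited: pods=10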

Compute Resources: Requests vs Limits

When allocating compute resources, each container may specify a request and a limit value for either CPU or memory. The quota tracking system charges only the request value against the quota usage. If a resource is tracked by quota and no request value is provided, the associated entity is rejected as part of admission into the cluster.

As an example, consider the following scenarios relative to tracking quota on CPU:

Table 2. Quota tracking based on container requests

Pod     Container   Requests[CPU]   Limits[CPU]   Result
pod-x   a           100m            500m          The quota usage is incremented 100m
pod-y   b           100m            none          The quota usage is incremented 100m
pod-y   c           none            500m          The quota usage is incremented 500m, since the request defaults to the limit
pod-z   d           none            none          The pod is rejected, since it does not enumerate a request
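
In a pod definition, the request and limit are declared in each container's resources stanza. The following is a minimal sketch of pod-x from the first row above; the image name is a placeholder:

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "pod-x"
  },
  "spec": {
    "containers": [
      {
        "name": "a",
        "image": "example/image",
        "resources": {
          "requests": {
            "cpu": "100m"
          },
          "limits": {
            "cpu": "500m"
          }
        }
      }
    ]
  }
}

Against a CPU quota, this pod charges 100m, its request value, rather than its 500m limit.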

The rationale for charging quota usage for the requested amount of a resource, rather than the limit, is the belief that a user should only be charged for what they are scheduled against in the cluster.

As a consequence, the user is able to spread their usage of a resource across multiple tiers of service. Let’s demonstrate this via an example with a 4 cpu quota.

The quota may be allocated as follows:

Table 3. Quota with Burstable resources

Pod     Container   Requests[CPU]   Limits[CPU]   Quality of Service Tier   Quota Usage
pod-x   a           1               4             Burstable                 1
pod-y   b           2               2             Guaranteed                2
pod-z   c           1               3             Burstable                 1

The user is restricted from using BestEffort CPU containers, but because the project has Burstable containers, at any point in time the containers may consume up to 9 CPU cores over a given time period if there is no contention on the node. If you want to restrict the ratio between the request and the limit, it is encouraged that you define a LimitRange with MaxLimitRequestRatio to control burst-out behavior, as shown in the sketch below. This would, in effect, let an administrator keep the difference between request and limit more in line with tracked usage, if desired.
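
The following is a minimal sketch of such a LimitRange, capping the cpu limit at four times the request; the name limits and the ratio value are illustrative:

{
  "apiVersion": "v1",
  "kind": "LimitRange",
  "metadata": {
    "name": "limits"
  },
  "spec": {
    "limits": [
      {
        "type": "Container",
        "maxLimitRequestRatio": {
          "cpu": "4"
        }
      }
    ]
  }
}

With this in place, container a above (request 1, limit 4) is still admitted, while a container requesting 1 cpu with a limit of 5 would be rejected.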

Sample Resource Quota File

resource-quota.json

{
  "apiVersion": "v1",
  "kind": "ResourceQuota",
  "metadata": {
    "name": "quota" (1)
  },
  "spec": {
    "hard": {
      "memory": "1Gi", (2)
      "cpu": "20", (3)
      "pods": "10", (4)
      "services": "5", (5)
      "replicationcontrollers":"5", (6)
      "resourcequotas":"1" (7)
    }
  }
}
1 The name of this quota document.
2 The total amount of memory requested across all containers may not exceed 1Gi.
3 The total amount of cpu requested across all containers may not exceed 20 Kubernetes compute units.
4 The total number of pods in the project may not exceed 10.
5 The total number of services in the project may not exceed 5.
6 The total number of replication controllers in the project may not exceed 5.
7 The total number of resource quota documents in the project may not exceed 1.

Create a Quota

To apply a quota to a project:

$ oc create -f resource-quota.json

View a Quota

To view usage statistics related to any hard limits defined in your quota:

$ oc get quota
NAME
quota
$ oc describe quota quota
Name:                   quota
Resource                Used    Hard
--------                ----    ----
cpu                     5       20
memory                  500Mi   1Gi
pods                    5       10
replicationcontrollers  5       5
resourcequotas          1       1
services                3       5

Configuring the Quota Synchronization Period

When a set of resources is deleted, the synchronization time frame of resources is determined by the resource-quota-sync-period setting in the /etc/origin/master/master-config.yaml file. Until your quota usage is restored, you may encounter problems when attempting to reuse the resources. Change the resource-quota-sync-period setting to have the set of resources regenerate at the desired interval (in seconds) and become available again:

kubernetesMasterConfig:
  apiLevels:
  - v1beta3
  - v1
  apiServerArguments: null
  controllerArguments:
    resource-quota-sync-period:
      - "10s"

Adjusting the regeneration time can be helpful for creating resources and determining resource usage when automation is used.

The resource-quota-sync-period setting is designed to balance system performance. Reducing the sync period can result in a heavy load on the master.

Accounting for Quota in Deployment Configurations

If a quota has been defined for your project, see Deployment Resources for considerations on any deployment configurations.
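
For example, a deployment strategy can declare compute resource requests for the deployer pod itself, so that deployments fit within the project quota. The following is a sketch of the relevant stanza of a deployment configuration; the values are illustrative:

{
  "spec": {
    "strategy": {
      "type": "Recreate",
      "resources": {
        "requests": {
          "cpu": "100m",
          "memory": "256Mi"
        }
      }
    }
  }
}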