This is a cache of https://docs.openshift.com/container-platform/4.5/serverless/integrations/gpu-resources.html. It is a snapshot of the page at 2024-11-26T01:15:04.736+0000.
Using NVIDIA GPU resources with serverless applications - Integrations | Serverless | OpenShift Container Platform 4.5

NVIDIA supports experimental use of GPU resources on OpenShift Container Platform. See OpenShift Container Platform on NVIDIA GPU accelerated clusters for more information about setting up GPU resources on OpenShift Container Platform.

After GPU resources are enabled for your OpenShift Container Platform cluster, you can specify GPU requirements for a Knative service using the kn CLI.

Procedure

You can specify a GPU resource requirement when you create a Knative service using kn.

  1. Create a service.

  2. Set the GPU resource requirement limit to 1 by using nvidia.com/gpu=1:

    $ kn service create hello --image docker.io/knativesamples/hellocuda-go --limit nvidia.com/gpu=1

    A GPU resource requirement limit of 1 means that the service has 1 dedicated GPU resource. Services do not share GPU resources. Any other services that require GPU resources must wait until the GPU resource is no longer in use.

    A limit of 1 GPU also means that applications are restricted from using more than 1 GPU resource. If a service is configured with more than 1 GPU resource, it is deployed on a node that can meet those GPU resource requirements.
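
    The kn command above is roughly equivalent to the following Knative Service manifest, which could be applied with oc apply -f instead of using the CLI. This is an illustrative sketch: the field layout follows the Knative Serving v1 API, and the nvidia.com/gpu limit goes under the container's resource limits (for extended resources such as GPUs, Kubernetes expects the value to be set as a limit):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: hello
    spec:
      template:
        spec:
          containers:
            - image: docker.io/knativesamples/hellocuda-go
              resources:
                limits:
                  nvidia.com/gpu: "1"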

Updating GPU requirements for a Knative service using kn
  • Update the service, changing the GPU resource requirement limit to 3 by using nvidia.com/gpu=3:

$ kn service update hello --limit nvidia.com/gpu=3
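
A sketch of how the updated limit would appear in the relevant fragment of the Service spec; because resource limits are part of the revision template, this update causes Knative to create a new revision of the service:

    spec:
      template:
        spec:
          containers:
            - image: docker.io/knativesamples/hellocuda-go
              resources:
                limits:
                  nvidia.com/gpu: "3"

You can inspect the running service, including its current revision, with kn service describe hello.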
