Global configuration

The OpenShift Serverless Operator manages the global configuration of a Knative installation, including propagating values from the KnativeServing and KnativeEventing custom resources to system config maps. Any updates that you apply to these config maps manually are overwritten by the Operator. To set values for these config maps, modify the Knative custom resources instead.

Knative has multiple config maps that are named with the prefix config-. All Knative config maps are created in the same namespace as the custom resource that they apply to. For example, if the KnativeServing custom resource is created in the knative-serving namespace, all Knative Serving config maps are also created in this namespace.

The spec.config in the Knative custom resources has one <name> entry for each config map, named config-<name>, with a value that is used for the config map data.
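
For example, the following KnativeServing CR populates the config-defaults config map through a defaults entry. This is a minimal sketch; container-concurrency is one of the keys in that config map, and "0", its upstream default, means unlimited concurrency:

Example KnativeServing CR
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    defaults: # propagated to the config-defaults config map
      container-concurrency: "0" # value copied into the config map data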

Configuring the default channel implementation

You can use the default-ch-webhook config map to specify the default channel implementation of Knative Eventing. You can specify the default channel implementation for the entire cluster or for one or more namespaces. Currently, the InMemoryChannel and KafkaChannel channel types are supported.

Prerequisites
  • You have administrator permissions on OpenShift Container Platform.

  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.

  • If you want to use Kafka channels as the default channel implementation, you must also install the KnativeKafka CR on your cluster.

Procedure
  • Modify the KnativeEventing custom resource to add configuration details for the default-ch-webhook config map:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      config: (1)
        default-ch-webhook: (2)
          default-ch-config: |
            clusterDefault: (3)
              apiVersion: messaging.knative.dev/v1
              kind: InMemoryChannel
              spec:
                delivery:
                  backoffDelay: PT0.5S
                  backoffPolicy: exponential
                  retry: 5
            namespaceDefaults: (4)
              my-namespace:
                apiVersion: messaging.knative.dev/v1beta1
                kind: KafkaChannel
                spec:
                  numPartitions: 1
                  replicationFactor: 1
    1 In spec.config, you can specify the config maps for which you want to add modified configurations.
    2 The default-ch-webhook config map can be used to specify the default channel implementation for the cluster or for one or more namespaces.
    3 The cluster-wide default channel type configuration. In this example, the default channel implementation for the cluster is InMemoryChannel.
    4 The namespace-scoped default channel type configuration. In this example, the default channel implementation for the my-namespace namespace is KafkaChannel.

    Configuring a namespace-specific default overrides any cluster-wide settings.
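
    After the Operator reconciles the change, you can optionally verify the propagated values by inspecting the managed config map. A minimal check, assuming the config map keeps the default-ch-webhook name used in spec.config:

    $ oc get configmap default-ch-webhook -n knative-eventing -o yaml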

Configuring the default broker backing channel

If you are using a channel-based broker, you can set the default backing channel type for the broker to either InMemoryChannel or KafkaChannel.

Prerequisites
  • You have administrator permissions on OpenShift Container Platform.

  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.

  • You have installed the OpenShift CLI (oc).

  • If you want to use Kafka channels as the default backing channel type, you must also install the KnativeKafka CR on your cluster.

Procedure
  1. Modify the KnativeEventing custom resource (CR) to add configuration details for the config-br-default-channel config map:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      config: (1)
        config-br-default-channel:
          channel-template-spec: |
            apiVersion: messaging.knative.dev/v1beta1
            kind: KafkaChannel (2)
            spec:
              numPartitions: 6 (3)
              replicationFactor: 3 (4)
    1 In spec.config, you can specify the config maps for which you want to add modified configurations.
    2 The default backing channel type configuration. In this example, the default channel implementation for the cluster is KafkaChannel.
    3 The number of partitions for the Kafka channel that backs the broker.
    4 The replication factor for the Kafka channel that backs the broker.
  2. Apply the updated KnativeEventing CR:

    $ oc apply -f <filename>
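
  3. Optional: Verify that the values were propagated to the Operator-managed config map. A minimal check, assuming the default knative-eventing installation namespace:

    Example command
    $ oc get configmap config-br-default-channel -n knative-eventing -o yaml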

Configuring the default broker class

You can use the config-br-defaults config map to specify default broker class settings for Knative Eventing. You can specify the default broker class for the entire cluster or for one or more namespaces. Currently, the MTChannelBasedBroker and Kafka broker types are supported.

Prerequisites
  • You have administrator permissions on OpenShift Container Platform.

  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.

  • If you want to use the Kafka broker as the default broker implementation, you must also install the KnativeKafka CR on your cluster.

Procedure
  • Modify the KnativeEventing custom resource to add configuration details for the config-br-defaults config map:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeEventing
    metadata:
      name: knative-eventing
      namespace: knative-eventing
    spec:
      defaultBrokerClass: Kafka (1)
      config: (2)
        config-br-defaults: (3)
          default-br-config: |
            clusterDefault: (4)
              brokerClass: Kafka
              apiVersion: v1
              kind: ConfigMap
              name: kafka-broker-config (5)
              namespace: knative-eventing (6)
            namespaceDefaults: (7)
              my-namespace:
                brokerClass: MTChannelBasedBroker
                apiVersion: v1
                kind: ConfigMap
                name: config-br-default-channel (8)
                namespace: knative-eventing (9)
    ...
    1 The default broker class for Knative Eventing.
    2 In spec.config, you can specify the config maps for which you want to add modified configurations.
    3 The config-br-defaults config map specifies the default settings for any broker that does not specify spec.config settings or a broker class.
    4 The cluster-wide default broker class configuration. In this example, the default broker class implementation for the cluster is Kafka.
    5 The kafka-broker-config config map specifies default settings for the Kafka broker. See "Configuring Kafka broker settings" in the "Additional resources" section.
    6 The namespace where the kafka-broker-config config map exists.
    7 The namespace-scoped default broker class configuration. In this example, the default broker class implementation for the my-namespace namespace is MTChannelBasedBroker. You can specify default broker class implementations for multiple namespaces.
    8 The config-br-default-channel config map specifies the default backing channel for the broker. See "Configuring the default broker backing channel" in the "Additional resources" section.
    9 The namespace where the config-br-default-channel config map exists.

    Configuring a namespace-specific default overrides any cluster-wide settings.
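
    With these defaults in place, a broker created without an explicit class or configuration picks them up automatically. For example, the following hypothetical broker receives the MTChannelBasedBroker class because it is created in the my-namespace namespace:

    Example broker
    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      name: example-broker
      namespace: my-namespace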

Enabling scale-to-zero

Knative Serving provides automatic scaling, or autoscaling, for applications to match incoming demand. You can use the enable-scale-to-zero spec to enable or disable scale-to-zero globally for applications on the cluster.

Prerequisites
  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.

  • You have cluster administrator permissions.

  • You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler.

Procedure
  • Modify the enable-scale-to-zero spec in the KnativeServing custom resource (CR):

    Example KnativeServing CR
    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        autoscaler:
          enable-scale-to-zero: "false" (1)
    1 The enable-scale-to-zero spec can be either "true" or "false". If set to "true", scale-to-zero is enabled. If set to "false", applications are scaled down to the configured minimum scale bound. The default value is "true".
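
    If you keep scale-to-zero enabled globally but want to prevent an individual application from scaling to zero, you can set a minimum scale on its revision template instead. A minimal sketch, using a hypothetical service name:

    Example Knative service with a minimum scale
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: example-service
    spec:
      template:
        metadata:
          annotations:
            autoscaling.knative.dev/minScale: "1" # keep at least one replica running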

Configuring the scale-to-zero grace period

Knative Serving provides automatic scaling down to zero pods for applications. You can use the scale-to-zero-grace-period spec to define the upper bound of time that Knative waits for the scale-to-zero machinery to be in place before the last replica of an application is removed.

Prerequisites
  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.

  • You have cluster administrator permissions.

  • You are using the default Knative Pod Autoscaler. The scale-to-zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler.

Procedure
  • Modify the scale-to-zero-grace-period spec in the KnativeServing custom resource (CR):

    Example KnativeServing CR
    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        autoscaler:
          scale-to-zero-grace-period: "30s" (1)
    1 The grace period time in seconds. The default value is 30 seconds.

Overriding system deployment configurations

You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeServing and KnativeEventing custom resources (CRs).

Overriding Knative Serving system deployment configurations

You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeServing custom resource (CR). Currently, overriding default configuration settings is supported for the resources, replicas, labels, annotations, and nodeSelector fields, as well as for the readiness and liveness fields for probes.

In the following example, a KnativeServing CR overrides the net-kourier-controller and webhook deployments so that:

  • The readiness probe timeout for net-kourier-controller is set to 10 seconds.

  • The deployment has specified CPU and memory resource limits.

  • The deployment has 3 replicas.

  • The example-label: label label is added.

  • The example-annotation: annotation annotation is added.

  • The nodeSelector field is set to select nodes with the disktype: hdd label.

The KnativeServing CR label and annotation settings override the deployment’s labels and annotations for both the deployment itself and the resulting pods.

KnativeServing CR example
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: ks
  namespace: knative-serving
spec:
  high-availability:
    replicas: 2
  deployments:
  - name: net-kourier-controller
    readinessProbes: (1)
      - container: controller
        timeoutSeconds: 10
  - name: webhook
    resources:
    - container: webhook
      requests:
        cpu: 300m
        memory: 60Mi
      limits:
        cpu: 1000m
        memory: 1000Mi
    replicas: 3
    labels:
      example-label: label
    annotations:
      example-annotation: annotation
    nodeSelector:
      disktype: hdd
1 You can use the readiness and liveness probe overrides to override all fields of a probe in a container of a deployment as specified in the Kubernetes API except for the fields related to the probe handler: exec, grpc, httpGet, and tcpSocket.
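
After the Operator reconciles the CR, you can verify that the overrides reached the deployment. A minimal check of the webhook replica count, assuming the default knative-serving installation namespace:

$ oc get deployment webhook -n knative-serving -o jsonpath='{.spec.replicas}'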

Overriding Knative Eventing system deployment configurations

You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeEventing custom resource (CR). Currently, overriding default configuration settings is supported for the eventing-controller, eventing-webhook, and imc-controller fields, as well as for the readiness and liveness fields for probes.

The replicas spec cannot override the number of replicas for deployments that use the Horizontal Pod Autoscaler (HPA), and does not work for the eventing-webhook deployment.

In the following example, a KnativeEventing CR overrides the eventing-controller deployment so that:

  • The readiness probe timeout for eventing-controller is set to 10 seconds.

  • The deployment has specified CPU and memory resource limits.

  • The deployment has 3 replicas.

  • The example-label: label label is added.

  • The example-annotation: annotation annotation is added.

  • The nodeSelector field is set to select nodes with the disktype: hdd label.

KnativeEventing CR example
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  deployments:
  - name: eventing-controller
    readinessProbes: (1)
      - container: controller
        timeoutSeconds: 10
    resources:
    - container: eventing-controller
      requests:
        cpu: 300m
        memory: 100Mi
      limits:
        cpu: 1000m
        memory: 250Mi
    replicas: 3
    labels:
      example-label: label
    annotations:
      example-annotation: annotation
    nodeSelector:
      disktype: hdd
1 You can use the readiness and liveness probe overrides to override all fields of a probe in a container of a deployment as specified in the Kubernetes API except for the fields related to the probe handler: exec, grpc, httpGet, and tcpSocket.

The KnativeEventing CR label and annotation settings override the deployment’s labels and annotations for both the deployment itself and the resulting pods.

Configuring the EmptyDir extension

emptyDir volumes are empty volumes that are created when a pod is created, and are used to provide temporary working disk space. emptyDir volumes are deleted when the pod they were created for is deleted.

The kubernetes.podspec-volumes-emptydir extension controls whether emptyDir volumes can be used with Knative Serving. To enable using emptyDir volumes, you must modify the KnativeServing custom resource (CR) to include the following YAML:

Example KnativeServing CR
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    features:
      kubernetes.podspec-volumes-emptydir: enabled
...
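
After the extension is enabled, Knative services can declare emptyDir volumes in their pod templates. A minimal sketch, using a hypothetical service name and an <image> placeholder:

Example Knative service using an emptyDir volume
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example-service
spec:
  template:
    spec:
      containers:
        - image: <image>
          volumeMounts:
            - name: scratch
              mountPath: /scratch
      volumes:
        - name: scratch
          emptyDir: {} # temporary storage that is deleted with the pod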

HTTPS redirection global settings

HTTPS redirection redirects incoming HTTP requests to HTTPS, ensuring that the requests are encrypted. You can enable HTTPS redirection for all services on the cluster by configuring the httpProtocol spec for the KnativeServing custom resource (CR).

Example KnativeServing CR that enables HTTPS redirection
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
spec:
  config:
    network:
      httpProtocol: "redirected"
...

Setting the URL scheme for external routes

The URL scheme of external routes defaults to HTTPS for enhanced security. This scheme is determined by the default-external-scheme key in the KnativeServing custom resource (CR) spec.

Default spec
...
spec:
  config:
    network:
      default-external-scheme: "https"
...

You can override the default spec to use HTTP by modifying the default-external-scheme key:

HTTP override spec
...
spec:
  config:
    network:
      default-external-scheme: "http"
...

Setting the Kourier Gateway service type

The Kourier Gateway is exposed by default as the ClusterIP service type. This service type is determined by the service-type ingress spec in the KnativeServing custom resource (CR).

Default spec
...
spec:
  ingress:
    kourier:
      service-type: ClusterIP
...

You can override the default service type to use a load balancer service type instead by modifying the service-type spec:

LoadBalancer override spec
...
spec:
  ingress:
    kourier:
      service-type: LoadBalancer
...
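
You can confirm the change by checking the type of the Kourier service after the Operator reconciles the CR. A minimal check; in OpenShift Serverless the gateway service is typically named kourier and runs in the knative-serving-ingress namespace:

$ oc get svc kourier -n knative-serving-ingress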

Enabling PVC support

Some serverless applications need persistent data storage. To achieve this, you can configure persistent volume claims (PVCs) for your Knative services.

Procedure
  1. To enable Knative Serving to use PVCs and write to them, modify the KnativeServing custom resource (CR) to include the following YAML:

    Enabling PVCs with write access
    ...
    spec:
      config:
        features:
          "kubernetes.podspec-persistent-volume-claim": enabled
          "kubernetes.podspec-persistent-volume-write": enabled
    ...
    • The kubernetes.podspec-persistent-volume-claim extension controls whether persistent volumes (PVs) can be used with Knative Serving.

    • The kubernetes.podspec-persistent-volume-write extension controls whether PVs are available to Knative Serving with the write access.

  2. To claim a PV, modify your service to include the PV configuration. For example, you might have a persistent volume claim with the following configuration:

    Use the storage class that supports the access mode that you are requesting. For example, you can use the ocs-storagecluster-cephfs class for the ReadWriteMany access mode.

    PersistentVolumeClaim configuration
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-pv-claim
      namespace: my-ns
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ocs-storagecluster-cephfs
      resources:
        requests:
          storage: 1Gi

    In this case, to claim a PV with write access, modify your service as follows:

    Knative service PVC configuration
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      namespace: my-ns
    ...
    spec:
      template:
        spec:
          containers:
            - ...
              volumeMounts: (1)
                - mountPath: /data
                  name: mydata
                  readOnly: false
          volumes:
            - name: mydata
              persistentVolumeClaim: (2)
                claimName: example-pv-claim
                readOnly: false (3)
    1 Volume mount specification.
    2 Persistent volume claim specification.
    3 Flag that controls read-only access. Setting readOnly to false enables write access to the volume.

    To successfully use persistent storage in Knative services, you need additional configuration, such as the user permissions for the Knative container user.

Enabling init containers

Init containers are specialized containers that are run before application containers in a pod. They are generally used to implement initialization logic for an application, which may include running setup scripts or downloading required configurations. You can enable the use of init containers for Knative services by modifying the KnativeServing custom resource (CR).

Init containers may cause longer application start-up times and should be used with caution for serverless applications, which are expected to scale up and down frequently.

Prerequisites
  • You have installed OpenShift Serverless Operator and Knative Serving on your cluster.

  • You have cluster administrator permissions.

Procedure
  • Enable the use of init containers by adding the kubernetes.podspec-init-containers flag to the KnativeServing CR:

    Example KnativeServing CR
    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
    spec:
      config:
        features:
          kubernetes.podspec-init-containers: enabled
    ...
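
    After the flag is enabled, Knative services can declare init containers in their revision templates. A minimal sketch, using hypothetical container names and <image> placeholders:

    Example Knative service with an init container
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: example-service
    spec:
      template:
        spec:
          initContainers:
            - name: setup # runs to completion before the application container starts
              image: <init_image>
              command: ["sh", "-c", "echo initializing"]
          containers:
            - image: <image>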

Tag-to-digest resolution

If the Knative Serving controller has access to the container registry, Knative Serving resolves image tags to a digest when you create a revision of a service. This is known as tag-to-digest resolution, and helps to provide consistency for deployments.

To give the controller access to the container registry on OpenShift Container Platform, you must create a secret and then configure controller custom certificates. You can configure controller custom certificates by modifying the controller-custom-certs spec in the KnativeServing custom resource (CR). The secret must reside in the same namespace as the KnativeServing CR.

If a secret is not included in the KnativeServing CR, this setting defaults to using public key infrastructure (PKI). When using PKI, the cluster-wide certificates are automatically injected into the Knative Serving controller by using the config-service-sa config map. The OpenShift Serverless Operator populates the config-service-sa config map with cluster-wide certificates and mounts the config map as a volume to the controller.
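
If you do not want the controller to resolve tags for images from particular registries, you can skip tag-to-digest resolution for them by listing the registries in the deployment config map. A sketch, assuming the registries-skipping-tag-resolving key provided by upstream Knative Serving; the registries shown are illustrative:

Example KnativeServing CR
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    deployment:
      registries-skipping-tag-resolving: "kind.local,ko.local,dev.local" # comma-separated registries to skip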

Configuring tag-to-digest resolution by using a secret

If the controller-custom-certs spec uses the secret type, the secret is mounted as a secret volume. Knative components consume the secret directly, assuming that the secret has the required certificates.

Prerequisites
  • You have cluster administrator permissions on OpenShift Container Platform.

  • You have installed the OpenShift Serverless Operator and Knative Serving on your cluster.

Procedure
  1. Create a secret:

    Example command
    $ oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate>
  2. Configure the controller-custom-certs spec in the KnativeServing custom resource (CR) to use the secret type:

    Example KnativeServing CR
    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      controller-custom-certs:
        name: custom-secret
        type: secret
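
  3. Apply the updated KnativeServing CR:

    Example command
    $ oc apply -f <filename>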