Routing

Knative leverages OpenShift Container Platform TLS termination to provide routing for Knative services. When a Knative service is created, an OpenShift Container Platform route is automatically created for the service. This route is managed by the OpenShift Serverless Operator. The OpenShift Container Platform route exposes the Knative service through the same domain as the OpenShift Container Platform cluster.

You can disable Operator control of OpenShift Container Platform routing so that you can configure a Knative route to directly use your TLS certificates instead.

Knative routes can also be used alongside the OpenShift Container Platform route to provide additional fine-grained routing capabilities, such as traffic splitting.
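
Traffic splitting, for example, is configured on the Knative Service resource itself rather than on the OpenShift Container Platform route. The following is a minimal sketch that splits traffic between two revisions; the service name, image, revision names, and percentages are placeholders:

  Example traffic split configuration
  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: <service_name>
  spec:
    template:
      spec:
        containers:
        - image: <image>
    traffic:
    - revisionName: <service_name>-00001
      percent: 80
    - revisionName: <service_name>-00002
      percent: 20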

Customizing labels and annotations for OpenShift Container Platform routes

OpenShift Container Platform routes support the use of custom labels and annotations, which you can configure by modifying the metadata spec of a Knative service. Custom labels and annotations are propagated from the service to the Knative route, then to the Knative ingress, and finally to the OpenShift Container Platform route.

Prerequisites
  • You must have the OpenShift Serverless Operator and Knative Serving installed on your OpenShift Container Platform cluster.

  • Install the OpenShift CLI (oc).

Procedure
  1. Create a Knative service that contains the label or annotation that you want to propagate to the OpenShift Container Platform route:

    • To create a service by using YAML:

      Example service created by using YAML
      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: <service_name>
        labels:
          <label_name>: <label_value>
        annotations:
          <annotation_name>: <annotation_value>
      ...
    • To create a service by using the Knative (kn) CLI, enter:

      Example service created by using a kn command
      $ kn service create <service_name> \
        --image=<image> \
        --annotation <annotation_name>=<annotation_value> \
        --label <label_name>=<label_value>
  2. Verify that the OpenShift Container Platform route has been created with the annotation or label that you added by inspecting the output from the following command:

    Example command for verification
    $ oc get routes.route.openshift.io \
         -l serving.knative.openshift.io/ingressName=<service_name> \ (1)
         -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \ (2)
         -n knative-serving-ingress -o yaml \
             | grep -e "<label_name>: \"<label_value>\""  -e "<annotation_name>: <annotation_value>" (3)
    
    1 Use the name of your service.
    2 Use the namespace where your service was created.
    3 Use your values for the label and annotation names and values.
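
    For example, with a hypothetical service named showcase in the default namespace, you can also print only the labels of the generated route by using a jsonpath query instead of grep:

    Example jsonpath query (service name and namespace are placeholders)
    $ oc get routes.route.openshift.io \
         -l serving.knative.openshift.io/ingressName=showcase \
         -l serving.knative.openshift.io/ingressNamespace=default \
         -n knative-serving-ingress \
         -o jsonpath='{.items[0].metadata.labels}'

    The output should include the propagated labels alongside the labels that the OpenShift Serverless Operator adds to the route.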

Configuring OpenShift Container Platform routes for Knative services

If you want to configure a Knative service to use your TLS certificate on OpenShift Container Platform, you must disable the automatic creation of a route for the service by the OpenShift Serverless Operator and instead manually create a route for the service.

When you complete the following procedure, the default OpenShift Container Platform route in the knative-serving-ingress namespace is not created. However, the Knative route for the application is still created in this namespace.

Prerequisites
  • The OpenShift Serverless Operator and Knative Serving component must be installed on your OpenShift Container Platform cluster.

  • Install the OpenShift CLI (oc).

Procedure
  1. Create a Knative service that includes the serving.knative.openshift.io/disableRoute=true annotation:

    The serving.knative.openshift.io/disableRoute=true annotation instructs OpenShift Serverless to not automatically create a route for you. However, the service still shows a URL and reaches a status of Ready. This URL does not work externally until you create your own route with the same hostname as the hostname in the URL.

    1. Create a Knative Service resource:

      Example resource
      apiVersion: serving.knative.dev/v1
      kind: Service
      metadata:
        name: <service_name>
        annotations:
          serving.knative.openshift.io/disableRoute: "true"
      spec:
        template:
          spec:
            containers:
            - image: <image>
      ...
    2. Apply the Service resource:

      $ oc apply -f <filename>
    3. Optional. Create a Knative service by using the kn service create command:

      Example kn command
      $ kn service create <service_name> \
        --image=gcr.io/knative-samples/helloworld-go \
        --annotation serving.knative.openshift.io/disableRoute=true
  2. Verify that no OpenShift Container Platform route has been created for the service:

    Example command
    $ oc get routes.route.openshift.io \
      -l serving.knative.openshift.io/ingressName=$KSERVICE_NAME \
      -l serving.knative.openshift.io/ingressNamespace=$KSERVICE_NAMESPACE \
      -n knative-serving-ingress

    You will see the following output:

    No resources found in knative-serving-ingress namespace.
  3. Create a Route resource in the knative-serving-ingress namespace:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      annotations:
        haproxy.router.openshift.io/timeout: 600s (1)
      name: <route_name> (2)
      namespace: knative-serving-ingress (3)
    spec:
      host: <service_host> (4)
      port:
        targetPort: http2
      to:
        kind: Service
        name: kourier
        weight: 100
      tls:
        insecureEdgeTerminationPolicy: Allow
        termination: edge (5)
        key: |-
          -----BEGIN PRIVATE KEY-----
          [...]
          -----END PRIVATE KEY-----
        certificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
        caCertificate: |-
          -----BEGIN CERTIFICATE-----
          [...]
          -----END CERTIFICATE-----
      wildcardPolicy: None
    1 The timeout value for the OpenShift Container Platform route. You must set the same value as the max-revision-timeout-seconds setting (600s by default).
    2 The name of the OpenShift Container Platform route.
    3 The namespace for the OpenShift Container Platform route. This must be knative-serving-ingress.
    4 The hostname for external access. You can set this to <service_name>-<service_namespace>.<domain>. One way to look up the hostname is shown in the sketch after this procedure.
    5 The certificates you want to use. Currently, only edge termination is supported.
  4. Apply the Route resource:

    $ oc apply -f <filename>
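
The host value of the route must match the hostname in the URL of the Knative service. The following sketch shows one way to look up that hostname and to confirm that the route you created now exists; it assumes that the service URL is published in the status.url field of the Knative service:

  Example verification commands
  $ oc get ksvc <service_name> -o jsonpath='{.status.url}'

  $ oc get routes.route.openshift.io -n knative-serving-ingress

The first command prints the URL of the Knative service; strip the scheme to obtain the value for spec.host. The second command lists the routes in the knative-serving-ingress namespace, which should now include the route that you created.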

Setting cluster availability to cluster local

By default, Knative services are published to a public IP address. This means that Knative services are public applications with a publicly accessible URL.

Publicly accessible URLs can be reached from outside the cluster. However, developers might need to build back-end services that are accessible only from inside the cluster, known as private services. Developers can label individual services in the cluster with the networking.knative.dev/visibility=cluster-local label to make them private.

For OpenShift Serverless 1.15.0 and newer versions, the serving.knative.dev/visibility label is no longer available. You must update existing services to use the networking.knative.dev/visibility label instead.
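
One possible way to migrate an existing service is to apply the new label and then remove the deprecated one, assuming the old label is still present on the service. This is a sketch; adjust the service name to your own:

  Example migration commands
  $ oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local

  $ oc label ksvc <service_name> serving.knative.dev/visibility-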

Prerequisites
  • The OpenShift Serverless Operator and Knative Serving are installed on the cluster.

  • You have created a Knative service.

Procedure
  • Set the visibility for your service by adding the networking.knative.dev/visibility=cluster-local label:

    $ oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local
Verification
  • Check that the URL for your service is now in the format http://<service_name>.<namespace>.svc.cluster.local by entering the following command and reviewing the output:

    $ oc get ksvc
    Example output
    NAME            URL                                                                         LATESTCREATED     LATESTREADY       READY   REASON
    hello           http://hello.default.svc.cluster.local                                      hello-tx2g7       hello-tx2g7       True
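
    To confirm that the service is reachable only from inside the cluster, you can call it from a temporary pod. This is a sketch; the curlimages/curl image and the hello service URL are examples:

    $ oc run curl-test --image=curlimages/curl --rm -it --restart=Never -- \
        curl -s http://hello.default.svc.cluster.local

    The request succeeds from inside the cluster, while the service is no longer exposed through a public route.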
