Kourier and Istio ingresses

OpenShift Serverless supports the following two ingress solutions:

  • Kourier

  • Istio using Red Hat OpenShift Service Mesh

The default is Kourier.
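The ingress solution is selected in the KnativeServing custom resource. The following is a minimal sketch of the default Kourier configuration; verify the apiVersion and field names against the documentation for your installed Serverless version:

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    kourier:
      enabled: true # Kourier is the default ingress solution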

Kourier and Istio ingress solutions

Kourier

Kourier is the default ingress solution for OpenShift Serverless. It has the following properties:

  • It is based on the Envoy proxy.

  • It is simple and lightweight.

  • It provides the basic routing functionality that OpenShift Serverless needs to deliver its feature set.

  • It supports basic observability and metrics.

  • It supports basic TLS termination of Knative Service routing.

  • It provides only limited configuration and extension options.

Istio using OpenShift Service Mesh

Using Istio as the ingress solution for OpenShift Serverless enables an additional feature set that is based on what Red Hat OpenShift Service Mesh offers:

  • Native mTLS for all connections

  • Serverless components are part of a service mesh

  • Additional observability and metrics

  • Authorization and authentication support

  • Custom rules and configuration, as supported by Red Hat OpenShift Service Mesh

However, the additional features come with higher overhead and resource consumption. For details, see the Red Hat OpenShift Service Mesh documentation.

See the "Integrating Service Mesh with OpenShift Serverless" section of Serverless documentation for Istio requirements and installation instructions.

Traffic configuration and routing

Regardless of whether you use Kourier or Istio, the traffic for a Knative Service is configured in the knative-serving namespace by the net-kourier-controller or the net-istio-controller, respectively.

The controller reads the KnativeService object and its child custom resources to configure the ingress solution. Both ingress solutions provide an ingress gateway pod that becomes part of the traffic path, and both are based on Envoy. By default, Serverless creates two routes for each KnativeService object, as shown in the sketch after this list:

  • A cluster-external route that is forwarded by the OpenShift router, for example myapp-namespace.example.com.

  • A cluster-local route containing the cluster domain, for example myapp.namespace.svc.cluster.local. Use this domain to call Knative services from Knative or other user workloads inside the cluster.
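For illustration, the following sketch shows a minimal Knative Service named myapp in the namespace namespace, matching the example domains above; the image reference is a placeholder, and the external domain depends on your cluster configuration:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
  namespace: namespace
spec:
  template:
    spec:
      containers:
        - image: image-registry.example.com/myapp:latest # placeholder image

# Routes created by Serverless for this service:
#   cluster-external: https://myapp-namespace.example.com
#   cluster-local:    http://myapp.namespace.svc.cluster.local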

The ingress gateway can forward requests in either serve mode or proxy mode:

  • In serve mode, requests go directly to the Queue-Proxy sidecar container of the Knative service.

  • In proxy mode, requests first go through the Activator component in the knative-serving namespace.

The choice of mode depends on the configuration of Knative, the Knative Service, and the current traffic. For example, if a Knative Service is scaled to zero, requests are sent to the Activator component, which acts as a buffer until a new pod of the Knative Service is started.
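One setting that influences the chosen mode is the target burst capacity. The following is a hedged sketch that assumes the upstream Knative annotation autoscaling.knative.dev/target-burst-capacity is available in your Serverless version; the service name and image are placeholders:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        # "0" lets the ingress gateway send requests directly to the
        # Queue-Proxy sidecar (serve mode) while the service has running pods;
        # "-1" keeps the Activator in the path at all times (proxy mode).
        autoscaling.knative.dev/target-burst-capacity: "0"
    spec:
      containers:
        - image: image-registry.example.com/myapp:latest # placeholder image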