By default, Knative services are published to a public IP address, which means they are public applications with a publicly accessible URL. Publicly accessible URLs can be reached from outside of the cluster.
However, developers may need to build back-end services that are only accessible from inside the cluster, known as private services.
Developers can make individual services in the cluster private by labeling them with the networking.knative.dev/visibility=cluster-local label.
For OpenShift Serverless 1.15.0 and newer versions, the serving.knative.dev/visibility label is no longer available. You must update existing services to use the networking.knative.dev/visibility label instead.
The OpenShift Serverless Operator and Knative Serving are installed on the cluster.
You have created a Knative service.
Set the visibility for your service by adding the networking.knative.dev/visibility=cluster-local label:
$ oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local
Check that the URL for your service is now in the format http://<service_name>.<namespace>.svc.cluster.local by entering the following command and reviewing the output:
$ oc get ksvc
NAME    URL                                       LATESTCREATED   LATESTREADY   READY   REASON
hello   http://hello.default.svc.cluster.local    hello-tx2g7     hello-tx2g7   True
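To confirm that the private service responds from inside the cluster, you can optionally send a request to the cluster-local URL from a temporary pod. This is a minimal sketch that assumes the example hello service shown above; the pod name curl-test and the curlimages/curl image are illustrative choices:
# The pod name "curl-test" and the curlimages/curl image are illustrative choices.
$ oc run curl-test --image=curlimages/curl --rm -it --restart=Never --command -- \
  curl -sv http://hello.default.svc.cluster.local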
Cluster-local services use the Kourier local gateway kourier-internal. If you want to use TLS traffic against the Kourier local gateway, you must configure your own server certificates in the local gateway.
You have installed the OpenShift Serverless Operator and Knative Serving.
You have administrator permissions.
You have installed the OpenShift (oc) CLI.
Deploy server certificates in the knative-serving-ingress namespace:
$ export san="knative"
Subject Alternative Name (SAN) validation is required so that these certificates can serve the request to <service_name>.<namespace>.svc.cluster.local.
Generate a root key and certificate:
$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
-subj '/O=Example/CN=Example' \
-keyout ca.key \
-out ca.crt
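Optionally, you can inspect the generated root certificate to confirm its subject and validity period before continuing:
$ openssl x509 -in ca.crt -noout -subject -dates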
Generate a server key and a certificate signing request that use SAN validation:
$ openssl req -out tls.csr -newkey rsa:2048 -nodes -keyout tls.key \
-subj "/CN=Example/O=Example" \
-addext "subjectAltName = dns:$san"
Create server certificates:
$ openssl x509 -req -extfile <(printf "subjectAltName=DNS:$san") \
-days 365 -in tls.csr \
-CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt
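At this point you can optionally check that the server certificate chains back to the root certificate you created; a successful check prints tls.crt: OK:
$ openssl verify -CAfile ca.crt tls.crt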
Configure a secret for the Kourier local gateway:
Deploy a secret in the knative-serving-ingress namespace from the certificates created in the previous steps:
$ oc create -n knative-serving-ingress secret tls server-certs \
--key=tls.key \
--cert=tls.crt --dry-run=client -o yaml | oc apply -f -
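You can optionally confirm that the secret exists in the knative-serving-ingress namespace before updating the KnativeServing custom resource:
$ oc get secret server-certs -n knative-serving-ingress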
Update the KnativeServing custom resource (CR) spec to configure the Kourier gateway to use the secret that you created:
...
spec:
  config:
    kourier:
      cluster-cert-secret: server-certs
...
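Instead of editing the CR manually, you could apply the same setting with a merge patch. This sketch assumes the default KnativeServing CR name knative-serving in the knative-serving namespace:
# Assumes the default KnativeServing CR named "knative-serving" in the "knative-serving" namespace.
$ oc patch knativeserving knative-serving -n knative-serving --type merge \
  -p '{"spec":{"config":{"kourier":{"cluster-cert-secret":"server-certs"}}}}'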
The Kourier controller sets the certificate without restarting the service, so that you do not need to restart the pod.
You can then access the Kourier internal service with TLS through port 443 by mounting the ca.crt file on the client and using it to verify the connection.
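As a rough sketch of client access, a pod that has the root certificate available, for example at the hypothetical path /etc/kourier-ca/ca.crt, could call a cluster-local service over HTTPS and verify the gateway certificate, provided the certificate's SAN matches the requested host name:
# /etc/kourier-ca/ca.crt is a hypothetical mount path for the root certificate.
$ curl --cacert /etc/kourier-ca/ca.crt https://hello.default.svc.cluster.local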