This is a cache of https://docs.openshift.com/container-platform/4.2/serverless/getting-started-knative-services.html. It is a snapshot of the page at 2024-11-23T02:41:35.324+0000.
Getting started with Knative services | Serverless applications | OpenShift Container Platform 4.2

You are viewing documentation for a release of Red Hat OpenShift Serverless that is no longer supported. Red Hat OpenShift Serverless is currently supported on OpenShift Container Platform 4.3 and newer.

Knative services are Kubernetes services that a user creates to deploy a serverless application. Each Knative service is defined by a route and a configuration, contained in a .yaml file.

Creating a Knative service

To create a service, you must create the service.yaml file.

You can copy the sample below. This sample creates a sample Go application called helloworld-go and allows you to specify the image for that application.

apiVersion: serving.knative.dev/v1alpha1 (1)
kind: Service
metadata:
  name: helloworld-go (2)
  namespace: default (3)
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go (4)
          env:
            - name: TARGET (5)
              value: "Go Sample v1"
1 Current version of Knative
2 The name of the application
3 The namespace the application will use
4 The URL to the image of the application
5 The environment variable printed out by the sample application
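The sample manifest above can be written to a `service.yaml` file from the command line. The following is a minimal sketch using a shell heredoc; the file name and contents simply mirror the sample above.

```shell
# Write the sample helloworld-go Knative service manifest to service.yaml.
# The contents mirror the sample manifest shown above.
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"
EOF
```

With the file in place, the `oc apply` step in the procedure below can consume it directly.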

Deploying a serverless application

To deploy a serverless application, you must apply the service.yaml file.

Procedure
  1. Navigate to the directory where the service.yaml file is contained.

  2. Deploy the application by applying the service.yaml file.

    $ oc apply --filename service.yaml

Now that the service has been created and the application has been deployed, Knative creates a new, immutable revision for this version of the application.

Knative also performs network programming to create a route, ingress, service, and load balancer for your application, and automatically scales your pods up and down based on traffic, scaling inactive pods down to zero.

The first time that a Knative service is created in a namespace, that namespace will automatically receive a new networking configuration. This might cause the initial service to take longer than is usually required for a service to become ready.

If the namespace has no existing NetworkPolicy configuration, an "allow all" type policy will be applied automatically. This policy will be removed automatically if all Knative Services are removed from that namespace and no other NetworkPolicy configurations have been applied.
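In plain Kubernetes terms, an "allow all" ingress policy looks roughly like the following. This is a sketch for illustration only, not the exact object that Knative creates; the name `allow-all-example` is hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-example   # hypothetical name, for illustration only
  namespace: default
spec:
  podSelector: {}           # an empty selector matches all pods in the namespace
  ingress:
    - {}                    # an empty rule allows all inbound traffic
```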

Connecting Knative Services to existing Kubernetes deployments

Knative Services can call a Kubernetes deployment in any namespace, provided that there are no existing additional network barriers.

A Kubernetes deployment can call a Knative Service if:

  • The Kubernetes deployment is in the same namespace as the target Knative Service.

  • The Kubernetes deployment is in a namespace that was manually added to the ServiceMeshMemberRoll in knative-serving-ingress.

  • The Kubernetes deployment uses the target Knative Service’s public URL.

    Knative Services are accessed using a public URL by default. If you want to connect your existing Kubernetes deployment to a Knative Service using a public URL, the target Knative Service must not be configured with private, cluster-local visibility.
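For reference, cluster-local visibility is typically set with a label on the Knative Service. The following sketch shows what to avoid when the Service must remain reachable at its public URL; the label key is taken from upstream Knative documentation of that era and can vary by Knative version, so treat it as an assumption.

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
  labels:
    # This label makes the Service private (cluster-local).
    # Omit it if the Service must be reachable at its public URL.
    # Label key assumed from upstream Knative docs; it can vary by version.
    serving.knative.dev/visibility: cluster-local
```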