Configuring Knative Kafka

Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities.

In addition to the Knative Eventing components that are provided as part of a core OpenShift Serverless installation, cluster administrators can install the KnativeKafka custom resource (CR).

Knative Kafka is not currently supported for IBM Z and IBM Power Systems.

The KnativeKafka CR provides users with additional options, such as:

  • Kafka source

  • Kafka channel

  • Kafka broker

  • Kafka sink

Installing Knative Kafka

Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Knative Kafka functionality is available in an OpenShift Serverless installation if you have installed the KnativeKafka custom resource.

Prerequisites
  • You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster.

  • You have access to a Red Hat AMQ Streams cluster.

  • Install the OpenShift CLI (oc) if you want to use the verification steps.

  • You have cluster administrator permissions on OpenShift Container Platform.

  • You are logged in to the OpenShift Container Platform web console.

Procedure
  1. In the Administrator perspective, navigate to Operators → Installed Operators.

  2. Check that the Project dropdown at the top of the page is set to Project: knative-eventing.

  3. In the list of Provided APIs for the OpenShift Serverless Operator, find the Knative Kafka box and click Create Instance.

  4. Configure the KnativeKafka object in the Create Knative Kafka page.

    To use the Kafka channel, source, broker, or sink on your cluster, set the enabled switch to true for each option that you want to use. These switches are set to false by default. Additionally, to use the Kafka channel, broker, or sink, you must specify the bootstrap servers.

    Example KnativeKafka custom resource
    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
        name: knative-kafka
        namespace: knative-eventing
    spec:
        channel:
            enabled: true (1)
            bootstrapServers: <bootstrap_servers> (2)
        source:
            enabled: true (3)
        broker:
            enabled: true (4)
            defaultConfig:
                bootstrapServers: <bootstrap_servers> (5)
                numPartitions: <num_partitions> (6)
                replicationFactor: <replication_factor> (7)
        sink:
            enabled: true (8)
    1 Enables developers to use the KafkaChannel channel type in the cluster.
    2 A comma-separated list of bootstrap servers from your AMQ Streams cluster.
    3 Enables developers to use the KafkaSource event source type in the cluster.
    4 Enables developers to use the Knative Kafka broker implementation in the cluster.
    5 A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
    6 Defines the number of partitions for the Kafka topics that back the Broker objects. The default is 10.
    7 Defines the replication factor for the Kafka topics that back the Broker objects. The default is 3.
    8 Enables developers to use a Kafka sink in the cluster.

    The replicationFactor value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster.

    • Using the form is recommended for simpler configurations that do not require full control of KnativeKafka object creation.

    • Editing the YAML is recommended for more complex configurations that require full control of KnativeKafka object creation. You can access the YAML by clicking the Edit YAML link in the top right of the Create Knative Kafka page.

  5. Click Create after you have completed any of the optional configurations for Kafka. You are automatically directed to the Knative Kafka tab where knative-kafka is in the list of resources.

Verification
  1. Click on the knative-kafka resource in the Knative Kafka tab. You are automatically directed to the Knative Kafka Overview page.

  2. View the list of Conditions for the resource and confirm that they have a status of True.

    Knative Kafka Overview page showing Conditions

    If the conditions have a status of Unknown or False, wait a few moments and then refresh the page. You can also inspect these conditions from the CLI; see the sketch after this procedure.

  3. Check that the Knative Kafka resources have been created:

    $ oc get pods -n knative-eventing
    Example output
    NAME                                        READY   STATUS    RESTARTS   AGE
    kafka-broker-dispatcher-7769fbbcbb-xgffn    2/2     Running   0          44s
    kafka-broker-receiver-5fb56f7656-fhq8d      2/2     Running   0          44s
    kafka-channel-dispatcher-84fd6cb7f9-k2tjv   2/2     Running   0          44s
    kafka-channel-receiver-9b7f795d5-c76xr      2/2     Running   0          44s
    kafka-controller-6f95659bf6-trd6r           2/2     Running   0          44s
    kafka-source-dispatcher-6bf98bdfff-8bcsn    2/2     Running   0          44s
    kafka-webhook-eventing-68dc95d54b-825xs     2/2     Running   0          44s
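
You can also inspect the readiness conditions from the CLI instead of the web console. The following is a minimal sketch, assuming the KnativeKafka CR reports standard Knative-style status conditions:

$ oc get knativekafka knative-kafka -n knative-eventing \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

All listed conditions should report True once the installation has finished reconciling.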

Security configuration for Knative Kafka

Kafka clusters are generally secured by using the TLS or SASL authentication methods. You can configure a Kafka broker or channel to work against a protected Red Hat AMQ Streams cluster by using TLS or SASL.

Red Hat recommends that you enable both SASL and TLS together.
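
The broker and sink procedures in this section select the security mode through the protocol key of the secret that you create, with the following values taken from the commands that follow:

  • protocol=SSL: TLS encryption with client certificate authentication.

  • protocol=SASL_SSL: SASL authentication over a TLS-encrypted connection.

  • protocol=SASL_PLAINTEXT: SASL authentication without encryption.

The channel and source secrets instead use keys such as saslType and tls.enabled, as shown in their respective procedures.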

Configuring TLS authentication for Kafka brokers

Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.

Prerequisites
  • You have cluster administrator permissions on OpenShift Container Platform.

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

  • You have a Kafka cluster CA certificate stored as a .pem file.

  • You have a Kafka cluster client certificate and a key stored as .pem files.

  • Install the OpenShift CLI (oc).

Procedure
  1. Create the certificate files as a secret in the knative-eventing namespace:

    $ oc create secret -n knative-eventing generic <secret_name> \
      --from-literal=protocol=SSL \
      --from-file=ca.crt=caroot.pem \
      --from-file=user.crt=certificate.pem \
      --from-file=user.key=key.pem

    Use the key names ca.crt, user.crt, and user.key. Do not change them.

  2. Edit the KnativeKafka CR and add a reference to your secret in the broker spec:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      broker:
        enabled: true
        defaultConfig:
          authSecretName: <secret_name>
    ...
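
If you prefer the CLI to the web console, you can open the CR for editing in the same way as the channel procedures later in this section:

$ oc edit knativekafka knative-kafka -n knative-eventing

The name and namespace match the KnativeKafka object shown in the example above.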

Configuring SASL authentication for Kafka brokers

Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.

Prerequisites
  • You have cluster administrator permissions on OpenShift Container Platform.

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

  • You have a username and password for a Kafka cluster.

  • You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.

  • If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.

  • Install the OpenShift CLI (oc).

Procedure
  1. Create the certificate files as a secret in the knative-eventing namespace:

    $ oc create secret -n knative-eventing generic <secret_name> \
      --from-literal=protocol=SASL_SSL \
      --from-literal=sasl.mechanism=<sasl_mechanism> \
      --from-file=ca.crt=caroot.pem \
      --from-literal=password="SecretPassword" \
      --from-literal=user="my-sasl-user"
    • Use the key names ca.crt, password, and sasl.mechanism. Do not change them.

    • If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example:

      $ oc create secret -n <namespace> generic <kafka_auth_secret> \
        --from-literal=tls.enabled=true \
        --from-literal=password="SecretPassword" \
        --from-literal=sasl.mechanism="SCRAM-SHA-512" \
        --from-literal=user="my-sasl-user"
  2. Edit the KnativeKafka CR and add a reference to your secret in the broker spec:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      broker:
        enabled: true
        defaultConfig:
          authSecretName: <secret_name>
    ...
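
As an alternative to editing the CR interactively, you can apply the same change non-interactively with a merge patch. This is a sketch; substitute your own secret name:

$ oc patch knativekafka knative-kafka -n knative-eventing --type merge \
  -p '{"spec":{"broker":{"defaultConfig":{"authSecretName":"<secret_name>"}}}}'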

Configuring TLS authentication for Kafka channels

Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.

Prerequisites
  • You have cluster administrator permissions on OpenShift Container Platform.

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

  • You have a Kafka cluster CA certificate stored as a .pem file.

  • You have a Kafka cluster client certificate and a key stored as .pem files.

  • Install the OpenShift CLI (oc).

Procedure
  1. Create the certificate files as secrets in your chosen namespace:

    $ oc create secret -n <namespace> generic <kafka_auth_secret> \
      --from-file=ca.crt=caroot.pem \
      --from-file=user.crt=certificate.pem \
      --from-file=user.key=key.pem

    Use the key names ca.crt, user.crt, and user.key. Do not change them.

  2. Start editing the KnativeKafka custom resource:

    $ oc edit knativekafka
  3. Reference your secret and the namespace of the secret:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: <kafka_auth_secret>
        authSecretNamespace: <kafka_auth_secret_namespace>
        bootstrapServers: <bootstrap_servers>
        enabled: true
      source:
        enabled: true

    Make sure to specify the matching port in the bootstrap server.

    For example:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: tls-user
        authSecretNamespace: kafka
        bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094
        enabled: true
      source:
        enabled: true
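
With authentication configured at the KnativeKafka level, developers can create KafkaChannel objects without any channel-level security settings; the credentials are taken from the referenced secret. A minimal sketch, with illustrative names:

apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: example-channel
  namespace: default
spec:
  numPartitions: 3
  replicationFactor: 1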

Configuring SASL authentication for Kafka channels

Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.

Prerequisites
  • You have cluster administrator permissions on OpenShift Container Platform.

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

  • You have a username and password for a Kafka cluster.

  • You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.

  • If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.

  • Install the OpenShift CLI (oc).

Procedure
  1. Create the certificate files as secrets in your chosen namespace:

    $ oc create secret -n <namespace> generic <kafka_auth_secret> \
      --from-file=ca.crt=caroot.pem \
      --from-literal=password="SecretPassword" \
      --from-literal=saslType="SCRAM-SHA-512" \
      --from-literal=user="my-sasl-user"
    • Use the key names ca.crt, password, and saslType. Do not change them.

    • If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example:

      $ oc create secret -n <namespace> generic <kafka_auth_secret> \
        --from-literal=tls.enabled=true \
        --from-literal=password="SecretPassword" \
        --from-literal=saslType="SCRAM-SHA-512" \
        --from-literal=user="my-sasl-user"
  2. Start editing the KnativeKafka custom resource:

    $ oc edit knativekafka
  3. Reference your secret and the namespace of the secret:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: <kafka_auth_secret>
        authSecretNamespace: <kafka_auth_secret_namespace>
        bootstrapServers: <bootstrap_servers>
        enabled: true
      source:
        enabled: true

    Make sure to specify the matching port in the bootstrap server.

    For example:

    apiVersion: operator.serverless.openshift.io/v1alpha1
    kind: KnativeKafka
    metadata:
      namespace: knative-eventing
      name: knative-kafka
    spec:
      channel:
        authSecretName: scram-user
        authSecretNamespace: kafka
        bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093
        enabled: true
      source:
        enabled: true
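
To confirm that channels can authenticate against the secured cluster, check that a KafkaChannel object becomes ready after you create it. A minimal sketch; the exact output columns depend on the installed CRD version:

$ oc get kafkachannels.messaging.knative.dev -n <namespace>

A READY value of False usually points to an authentication or bootstrap server problem.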

Configuring SASL authentication for Kafka sources

Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.

Prerequisites
  • You have cluster or dedicated administrator permissions on OpenShift Container Platform.

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

  • You have a username and password for a Kafka cluster.

  • You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.

  • If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Create the certificate files as secrets in your chosen namespace:

    $ oc create secret -n <namespace> generic <kafka_auth_secret> \
      --from-file=ca.crt=caroot.pem \
      --from-literal=password="SecretPassword" \
      --from-literal=saslType="SCRAM-SHA-512" \ (1)
      --from-literal=user="my-sasl-user"
    1 The SASL type can be PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
  2. Create or modify your Kafka source so that it contains the following spec configuration:

    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: example-source
    spec:
    ...
      net:
        sasl:
          enable: true
          user:
            secretKeyRef:
              name: <kafka_auth_secret>
              key: user
          password:
            secretKeyRef:
              name: <kafka_auth_secret>
              key: password
          type:
            secretKeyRef:
              name: <kafka_auth_secret>
              key: saslType
        tls:
          enable: true
          caCert: (1)
            secretKeyRef:
              name: <kafka_auth_secret>
              key: ca.crt
    ...
    1 The caCert spec is not required if you are using a public cloud Kafka service, such as Red Hat OpenShift Streams for Apache Kafka.
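
For context, a complete KafkaSource typically also declares bootstrap servers, the topics to consume, and a sink to deliver events to. The following sketch combines the net block from the procedure above with illustrative values for the remaining fields; the event-display service name is only an example, and the elided secretKeyRef entries are as shown in the previous step:

apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: example-source
spec:
  bootstrapServers:
    - <bootstrap_servers>
  topics:
    - <topic_name>
  net:
    sasl:
      enable: true
      ...
    tls:
      enable: true
      ...
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display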

Configuring security for Kafka sinks

Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka.

Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed.

Prerequisites
  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resources (CRs) are installed on your OpenShift Container Platform cluster.

  • Kafka sink is enabled in the KnativeKafka CR.

  • You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

  • You have a Kafka cluster CA certificate stored as a .pem file.

  • You have a Kafka cluster client certificate and a key stored as .pem files.

  • You have installed the OpenShift CLI (oc).

  • You have chosen the SASL mechanism to use, for example, PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.

Procedure
  1. Create the certificate files as a secret in the same namespace as your KafkaSink object:

    Certificates and keys must be in PEM format.

    • For authentication using SASL without encryption:

      $ oc create secret -n <namespace> generic <secret_name> \
        --from-literal=protocol=SASL_PLAINTEXT \
        --from-literal=sasl.mechanism=<sasl_mechanism> \
        --from-literal=user=<username> \
        --from-literal=password=<password>
    • For authentication using SASL and encryption using TLS:

      $ oc create secret -n <namespace> generic <secret_name> \
        --from-literal=protocol=SASL_SSL \
        --from-literal=sasl.mechanism=<sasl_mechanism> \
        --from-file=ca.crt=<my_caroot.pem_file_path> \ (1)
        --from-literal=user=<username> \
        --from-literal=password=<password>
      1 The ca.crt can be omitted to use the system’s root CA set if you are using a public cloud managed Kafka service, such as Red Hat OpenShift Streams for Apache Kafka.
    • For authentication and encryption using TLS:

      $ oc create secret -n <namespace> generic <secret_name> \
        --from-literal=protocol=SSL \
        --from-file=ca.crt=<my_caroot.pem_file_path> \ (1)
        --from-file=user.crt=<my_cert.pem_file_path> \
        --from-file=user.key=<my_key.pem_file_path>
      1 The ca.crt can be omitted to use the system’s root CA set if you are using a public cloud managed Kafka service, such as Red Hat OpenShift Streams for Apache Kafka.
  2. Create or modify a KafkaSink object and add a reference to your secret in the auth spec:

    apiVersion: eventing.knative.dev/v1alpha1
    kind: KafkaSink
    metadata:
       name: <sink_name>
       namespace: <namespace>
    spec:
    ...
       auth:
         secret:
           ref:
             name: <secret_name>
    ...
  3. Apply the KafkaSink object:

    $ oc apply -f <filename>
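
Putting the steps together, a complete KafkaSink manifest typically looks like the following sketch; the topic and bootstrap server values are placeholders:

apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: <sink_name>
  namespace: <namespace>
spec:
  topic: <topic_name>
  bootstrapServers:
    - <bootstrap_servers>
  auth:
    secret:
      ref:
        name: <secret_name>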

Configuring Kafka broker settings

You can configure the replication factor, bootstrap servers, and the number of topic partitions for a Kafka broker by creating a config map and referencing this config map in the Kafka Broker object.

Prerequisites
  • You have cluster or dedicated administrator permissions on OpenShift Container Platform.

  • The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your OpenShift Container Platform cluster.

  • You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.

  • You have installed the OpenShift CLI (oc).

Procedure
  1. Modify the kafka-broker-config config map, or create your own config map that contains the following configuration:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: <config_map_name> (1)
      namespace: <namespace> (2)
    data:
      default.topic.partitions: <integer> (3)
      default.topic.replication.factor: <integer> (4)
      bootstrap.servers: <list_of_servers> (5)
    1 The config map name.
    2 The namespace where the config map exists.
    3 The number of topic partitions for the Kafka broker. This controls how quickly events can be sent to the broker. A higher number of partitions requires greater compute resources.
    4 The replication factor of topic messages. This prevents against data loss. A higher replication factor requires greater compute resources and more storage.
    5 A comma-separated list of bootstrap servers. The servers can be inside or outside of the OpenShift Container Platform cluster, and identify the Kafka cluster that the broker receives events from and sends events to.

    The default.topic.replication.factor value must be less than or equal to the number of Kafka broker instances in your cluster. For example, if you only have one Kafka broker, the default.topic.replication.factor value must not be more than 1.

    Example Kafka broker config map
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kafka-broker-config
      namespace: knative-eventing
    data:
      default.topic.partitions: "10"
      default.topic.replication.factor: "3"
      bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092"
  2. Apply the config map:

    $ oc apply -f <config_map_filename>
  3. Specify the config map for the Kafka Broker object:

    Example Broker object
    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      name: <broker_name> (1)
      namespace: <namespace> (2)
      annotations:
        eventing.knative.dev/broker.class: Kafka (3)
    spec:
      config:
        apiVersion: v1
        kind: ConfigMap
        name: <config_map_name> (4)
        namespace: <namespace> (5)
    ...
    1 The broker name.
    2 The namespace where the broker exists.
    3 The broker class annotation. In this example, the broker is a Kafka broker that uses the class value Kafka.
    4 The config map name.
    5 The namespace where the config map exists.
  4. Apply the broker:

    $ oc apply -f <broker_filename>
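
To verify that the broker picked up the referenced config map, check that it becomes ready. A minimal sketch:

$ oc get broker <broker_name> -n <namespace>

The READY column should report True, and the URL column shows the address that event producers send events to.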