Network configuration

Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.

The CNO configuration inherits the following fields during cluster installation from the Network API in the config.openshift.io API group, and these fields cannot be changed:

clusterNetwork

IP address pools from which pod IP addresses are allocated.

serviceNetwork

IP address pool for services.

defaultNetwork.type

Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.

After cluster installation, you cannot modify these fields.
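
For reference, you can review the values that the CNO inherited by viewing the cluster Network configuration resource:

$ oc describe network.config/cluster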

Enabling the cluster-wide proxy

The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec. For example:

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: ""
status:

A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object.

Only the Proxy object named cluster is supported, and no additional proxies can be created.

Prerequisites
  • Cluster administrator permissions

  • OpenShift Container Platform oc CLI tool installed

Procedure
  1. Create a config map that contains any additional CA certificates required for proxying HTTPS connections.

    You can skip this step if the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.

    1. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates:

      apiVersion: v1
      data:
        ca-bundle.crt: | (1)
          <MY_PEM_ENCODED_CERTS> (2)
      kind: ConfigMap
      metadata:
        name: user-ca-bundle (3)
        namespace: openshift-config (4)
      1 This data key must be named ca-bundle.crt.
      2 One or more PEM-encoded X.509 certificates used to sign the proxy’s identity certificate.
      3 The config map name that will be referenced from the Proxy object.
      4 The config map must be in the openshift-config namespace.
    2. Create the config map from this file:

      $ oc create -f user-ca-bundle.yaml
  2. Use the oc edit command to modify the Proxy object:

    $ oc edit proxy/cluster
  3. Configure the necessary fields for the proxy:

    apiVersion: config.openshift.io/v1
    kind: Proxy
    metadata:
      name: cluster
    spec:
      httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
      noProxy: example.com (3)
      readinessEndpoints:
      - http://www.google.com (4)
      - https://www.google.com
      trustedCA:
        name: user-ca-bundle (5)
    1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
    2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https. Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http. This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses.
    3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying.

    Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

    This field is ignored if neither the httpProxy nor the httpsProxy field is set.

    4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status.
    5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
  4. Save the file to apply the changes.
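
To confirm that the configuration was accepted, you can view the Proxy object again. After the readiness checks described above succeed, the httpProxy and httpsProxy values that you set also appear in the status section:

$ oc get proxy/cluster -o yaml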

Setting DNS to private

After you deploy a cluster, you can modify its DNS to use only a private zone.

Procedure
  1. Review the DNS custom resource for your cluster:

    $ oc get dnses.config.openshift.io/cluster -o yaml
    Example output
    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: "2019-10-25T18:27:09Z"
      generation: 2
      name: cluster
      resourceVersion: "37966"
      selfLink: /apis/config.openshift.io/v1/dnses/cluster
      uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
    spec:
      baseDomain: <base_domain>
      privateZone:
        tags:
          Name: <infrastructure_id>-int
          kubernetes.io/cluster/<infrastructure_id>: owned
      publicZone:
        id: Z2XXXXXXXXXXA4
    status: {}

    Note that the spec section contains both a private and a public zone.

  2. Patch the DNS custom resource to remove the public zone:

    $ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
    dns.config.openshift.io/cluster patched

    Because the Ingress Controller consults the DNS definition when it creates Ingress objects, only private records are created when you create or modify Ingress objects.

    DNS records for the existing Ingress objects are not modified when you remove the public zone.

  3. Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed:

    $ oc get dnses.config.openshift.io/cluster -o yaml
    Example output
    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: "2019-10-25T18:27:09Z"
      generation: 2
      name: cluster
      resourceVersion: "37966"
      selfLink: /apis/config.openshift.io/v1/dnses/cluster
      uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
    spec:
      baseDomain: <base_domain>
      privateZone:
        tags:
          Name: <infrastructure_id>-int
          kubernetes.io/cluster/<infrastructure_id>: owned
    status: {}

Configuring ingress cluster traffic

OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster:

  • If you have HTTP/HTTPS, use an Ingress Controller.

  • If you have a TLS-encrypted protocol other than HTTPS, such as TLS with the SNI header, use an Ingress Controller.

  • Otherwise, use a load balancer, an external IP, or a node port.

Method: Use an Ingress Controller
Purpose: Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS, such as TLS with the SNI header.

Method: Automatically assign an external IP by using a load balancer service
Purpose: Allows traffic to non-standard ports through an IP address assigned from a pool.

Method: Manually assign an external IP to a service
Purpose: Allows traffic to non-standard ports through a specific IP address.

Method: Configure a NodePort
Purpose: Expose a service on all nodes in the cluster.
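
For example, the NodePort method in the preceding table corresponds to a Service manifest similar to the following sketch; the service name, selector label, and port numbers are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30080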

Configuring the node port service range

As a cluster administrator, you can expand the available node port range. If your cluster uses a large number of node ports, you might need to increase the number of available ports.

The default port range is 30000-32767. You can never reduce the port range, even if you first expand it beyond the default range.

Prerequisites

  • Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900, the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration.

Expanding the node port range

You can expand the node port range for the cluster.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in to the cluster with a user with cluster-admin privileges.

Procedure
  1. To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range.

    $ oc patch network.config.openshift.io cluster --type=merge -p \
      '{
        "spec":
          { "serviceNodePortRange": "30000-<port>" }
      }'

    You can alternatively apply the following YAML to update the node port range:

    apiVersion: config.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      serviceNodePortRange: "30000-<port>"
    Example output
    network.config.openshift.io/cluster patched
  2. To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply.

    $ oc get configmaps -n openshift-kube-apiserver config \
      -o jsonpath="{.data['config\.yaml']}" | \
      grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]'
    Example output
    "service-node-port-range":["30000-33000"]

Configuring IPsec encryption

With IPsec enabled, all network traffic between nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel.

IPsec is disabled by default.

Prerequisites

  • Your cluster must use the OVN-Kubernetes cluster network provider.

Enabling IPsec encryption

As a cluster administrator, you can enable IPsec encryption after cluster installation.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Log in to the cluster with a user with cluster-admin privileges.

  • You have reduced the size of your cluster MTU by 46 bytes to allow for the overhead of the IPsec ESP header.

Procedure
  • To enable IPsec encryption, enter the following command:

    $ oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{ }}}}}'

Verifying that IPsec is enabled

As a cluster administrator, you can verify that IPsec is enabled.

Verification
  1. To find the names of the OVN-Kubernetes control plane pods, enter the following command:

    $ oc get pods -n openshift-ovn-kubernetes | grep ovnkube-master
    Example output
    ovnkube-master-4496s        1/1     Running   0          6h39m
    ovnkube-master-d6cht        1/1     Running   0          6h42m
    ovnkube-master-skblc        1/1     Running   0          6h51m
    ovnkube-master-vf8rf        1/1     Running   0          6h51m
    ovnkube-master-w7hjr        1/1     Running   0          6h51m
    ovnkube-master-zsk7x        1/1     Running   0          6h42m
  2. Verify that IPsec is enabled on your cluster:

    $ oc -n openshift-ovn-kubernetes -c nbdb rsh ovnkube-master-<XXXXX> \
      ovn-nbctl --no-leader-only get nb_global . ipsec

    where:

    <XXXXX>

    Specifies the random sequence of letters for a pod from the previous step.

    Example output
    true
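
If you later need to disable IPsec, the following is a sketch of the reverse operation: it removes the ipsecConfig block that the enable command added, assuming no other ovnKubernetesConfig customizations were made in that block:

$ oc patch networks.operator.openshift.io cluster --type=json \
  -p='[{"op":"remove", "path":"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig"}]'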

Configuring network policy

As a cluster administrator or project administrator, you can configure network policies for a project.

About network policy

In a cluster using a Kubernetes Container Network Interface (CNI) plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.11, OpenShift SDN supports using network policy in its default network isolation mode.

Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. However, pods connecting to the host-networked pods might be affected by the network policy rules.

Network policies cannot block traffic from localhost or from their resident nodes.

By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.

If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible.

A network policy applies to only the TCP, UDP, ICMP, and SCTP protocols. Other protocols are not affected.

The following example NetworkPolicy objects demonstrate supporting different scenarios:

  • Deny all traffic:

    To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: deny-by-default
    spec:
      podSelector: {}
      ingress: []
  • Only allow connections from the OpenShift Container Platform Ingress Controller:

    To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-openshift-ingress
    spec:
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
      podSelector: {}
      policyTypes:
      - Ingress
  • Only accept connections from pods within a project:

    To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-same-namespace
    spec:
      podSelector: {}
      ingress:
      - from:
        - podSelector: {}
  • Only allow HTTP and HTTPS traffic based on pod labels:

    To enable only HTTP and HTTPS access to the pods with a specific label (role=frontend in the following example), add a NetworkPolicy object similar to the following:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-http-and-https
    spec:
      podSelector:
        matchLabels:
          role: frontend
      ingress:
      - ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
  • Accept connections by using both namespace and pod selectors:

    To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: allow-pod-and-namespace-both
    spec:
      podSelector:
        matchLabels:
          name: test-pods
      ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                project: project_name
            podSelector:
              matchLabels:
                name: test-pods
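
The allow-pod-and-namespace-both example matches source namespaces by the project: project_name label, so that label must be present on the namespace that you want to allow. A minimal sketch of applying it, assuming project_name is the value used in the policy:

$ oc label namespace <source_namespace> project=project_name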

NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements.

For example, for the NetworkPolicy objects defined in the previous samples, you can define both the allow-same-namespace and allow-http-and-https policies within the same project. Pods with the label role=frontend then accept any connection allowed by either policy: connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.

Using the allow-from-router network policy

Use the following NetworkPolicy to allow external traffic regardless of the router configuration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-router
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/ingress: "" (1)
  podSelector: {}
  policyTypes:
  - Ingress
1 The policy-group.network.openshift.io/ingress: "" label supports both OpenShift SDN and OVN-Kubernetes.

Using the allow-from-hostnetwork network policy

Add the following allow-from-hostnetwork NetworkPolicy object to direct traffic from the host network pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-hostnetwork
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          policy-group.network.openshift.io/host-network: ""
  podSelector: {}
  policyTypes:
  - Ingress

Example NetworkPolicy object

The following example NetworkPolicy object is annotated to describe its fields:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-27107 (1)
spec:
  podSelector: (2)
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: (3)
        matchLabels:
          app: app
    ports: (4)
    - protocol: TCP
      port: 27017
1 The name of the NetworkPolicy object.
2 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object.
3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy.
4 A list of one or more destination ports on which to accept traffic.

Creating a network policy using the CLI

To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy.

If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.

Prerequisites
  • Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.

  • You installed the OpenShift CLI (oc).

  • You are logged in to the cluster with a user with admin privileges.

  • You are working in the namespace that the network policy applies to.

Procedure
  1. Create a policy rule:

    1. Create a <policy_name>.yaml file:

      $ touch <policy_name>.yaml

      where:

      <policy_name>

      Specifies the network policy file name.

    2. Define a network policy in the file that you just created, such as in the following examples:

      Deny ingress from all pods in all namespaces
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: deny-by-default
      spec:
        podSelector: {}
        ingress: []
      Allow ingress from all pods in the same namespace
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-same-namespace
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector: {}
  2. To create the network policy object, enter the following command:

    $ oc apply -f <policy_name>.yaml -n <namespace>

    where:

    <policy_name>

    Specifies the network policy file name.

    <namespace>

    Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.

    Example output
    networkpolicy.networking.k8s.io/deny-by-default created

If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console.

Configuring multitenant isolation by using network policy

You can configure your project to isolate it from pods and services in other project namespaces.

Prerequisites
  • Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.

  • You installed the OpenShift CLI (oc).

  • You are logged in to the cluster with a user with admin privileges.

Procedure
  1. Create the following NetworkPolicy objects:

    1. A policy named allow-from-openshift-ingress.

      $ cat << EOF| oc create -f -
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-ingress
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                policy-group.network.openshift.io/ingress: ""
        podSelector: {}
        policyTypes:
        - Ingress
      EOF

      policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label.

    2. A policy named allow-from-openshift-monitoring:

      $ cat << EOF| oc create -f -
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-monitoring
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                network.openshift.io/policy-group: monitoring
        podSelector: {}
        policyTypes:
        - Ingress
      EOF
    3. A policy named allow-same-namespace:

      $ cat << EOF| oc create -f -
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-same-namespace
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector: {}
      EOF
    4. A policy named allow-from-kube-apiserver-operator:

      $ cat << EOF| oc create -f -
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-kube-apiserver-operator
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: openshift-kube-apiserver-operator
            podSelector:
              matchLabels:
                app: kube-apiserver-operator
        policyTypes:
        - Ingress
      EOF
  2. Optional: To confirm that the network policies exist in your current project, enter the following command:

    $ oc describe networkpolicy
    Example output
    Name:         allow-from-openshift-ingress
    Namespace:    example1
    Created on:   2020-06-09 00:28:17 -0400 EDT
    Labels:       <none>
    Annotations:  <none>
    Spec:
      PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
      Allowing ingress traffic:
        To Port: <any> (traffic allowed to all ports)
        From:
          NamespaceSelector: network.openshift.io/policy-group: ingress
      Not affecting egress traffic
      Policy Types: Ingress
    
    
    Name:         allow-from-openshift-monitoring
    Namespace:    example1
    Created on:   2020-06-09 00:29:57 -0400 EDT
    Labels:       <none>
    Annotations:  <none>
    Spec:
      PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
      Allowing ingress traffic:
        To Port: <any> (traffic allowed to all ports)
        From:
          NamespaceSelector: network.openshift.io/policy-group: monitoring
      Not affecting egress traffic
      Policy Types: Ingress

Creating default network policies for a new project

As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project.

Modifying the template for new projects

As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.

To create your own custom project template:

Procedure
  1. Log in as a user with cluster-admin privileges.

  2. Generate the default project template:

    $ oc adm create-bootstrap-project-template -o yaml > template.yaml
  3. Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.

  4. The project template must be created in the openshift-config namespace. Load your modified template:

    $ oc create -f template.yaml -n openshift-config
  5. Edit the project configuration resource using the web console or CLI.

    • Using the web console:

      1. Navigate to the Administration → Cluster Settings page.

      2. Click Configuration to view all configuration resources.

      3. Find the entry for Project and click Edit YAML.

    • Using the CLI:

      1. Edit the project.config.openshift.io/cluster resource:

        $ oc edit project.config.openshift.io/cluster
  6. Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.

    Project configuration resource with custom project template
    apiVersion: config.openshift.io/v1
    kind: Project
    metadata:
      ...
    spec:
      projectRequestTemplate:
        name: <template_name>
  7. After you save your changes, create a new project to verify that your changes were successfully applied.

Adding network policies to the new project template

As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project.

Prerequisites
  • Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.

  • You installed the OpenShift CLI (oc).

  • You must log in to the cluster with a user with cluster-admin privileges.

  • You must have created a custom default project template for new projects.

Procedure
  1. Edit the default template for a new project by running the following command:

    $ oc edit template <project_template> -n openshift-config

    Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.

  2. In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.

    In the following example, the objects parameter collection includes several NetworkPolicy objects.

    objects:
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-same-namespace
      spec:
        podSelector: {}
        ingress:
        - from:
          - podSelector: {}
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-openshift-ingress
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                network.openshift.io/policy-group: ingress
        podSelector: {}
        policyTypes:
        - Ingress
    - apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-from-kube-apiserver-operator
      spec:
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: openshift-kube-apiserver-operator
            podSelector:
              matchLabels:
                app: kube-apiserver-operator
        policyTypes:
        - Ingress
    ...
  3. Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands:

    1. Create a new project:

      $ oc new-project <project> (1)
      1 Replace <project> with the name for the project you are creating.
    2. Confirm that the network policy objects in the new project template exist in the new project:

      $ oc get networkpolicy
      NAME                           POD-SELECTOR   AGE
      allow-from-openshift-ingress   <none>         7s
      allow-from-same-namespace      <none>         7s

Supported configurations

The following configurations are supported for the current release of Red Hat OpenShift Service Mesh.

Supported platforms

The Red Hat OpenShift Service Mesh Operator supports multiple versions of the ServiceMeshControlPlane resource. Version 2.4 Service Mesh control planes are supported on the following platform versions:

  • Red Hat OpenShift Container Platform version 4.10 or later.

  • Red Hat OpenShift Dedicated version 4.

  • Azure Red Hat OpenShift (ARO) version 4.

  • Red Hat OpenShift Service on AWS (ROSA).

Unsupported configurations

Explicitly unsupported cases include:

  • OpenShift Online is not supported for Red Hat OpenShift Service Mesh.

  • Red Hat OpenShift Service Mesh does not support the management of microservices outside the cluster where Service Mesh is running.

Supported network configurations

Red Hat OpenShift Service Mesh supports the following network configurations.

  • OpenShift-SDN

  • OVN-Kubernetes is available on all supported versions of OpenShift Container Platform.

  • Third-Party Container Network Interface (CNI) plugins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information.

Supported configurations for Service Mesh

  • This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64, IBM Z, and IBM Power.

    • IBM Z is only supported on OpenShift Container Platform 4.10 and later.

    • IBM Power is only supported on OpenShift Container Platform 4.10 and later.

  • Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster.

  • Configurations that do not integrate external services such as virtual machines.

  • Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented.

Supported configurations for Kiali

  • The Kiali console is only supported on the two most recent releases of the Google Chrome, Microsoft Edge, Mozilla Firefox, or Apple Safari browsers.

  • The openshift authentication strategy is the only supported authentication configuration when Kiali is deployed with Red Hat OpenShift Service Mesh (OSSM). The openshift strategy controls access based on the user's role-based access control (RBAC) roles in OpenShift Container Platform.

Supported configurations for Distributed Tracing

  • Jaeger agent as a sidecar is the only supported configuration for Jaeger. Jaeger as a daemonset is not supported for multitenant installations or OpenShift Dedicated.

Supported WebAssembly module

  • 3scale WebAssembly is the only provided WebAssembly module. You can create custom WebAssembly modules.

Operator overview

Red Hat OpenShift Service Mesh requires the following four Operators:

  • OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with the distributed tracing platform (Jaeger). It is based on the open source Elasticsearch project.

  • Red Hat OpenShift distributed tracing platform (Jaeger) - Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project.

  • Kiali Operator provided by Red Hat - Provides observability for your service mesh. You can view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project.

  • Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project.

Next steps

Optimizing routing

The OpenShift Container Platform HAProxy router can be scaled or configured to optimize performance.

Baseline Ingress Controller (router) performance

The OpenShift Container Platform Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses.

When evaluating the performance of a single HAProxy router in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular:

  • HTTP keep-alive/close mode

  • Route type

  • TLS session resumption client support

  • Number of concurrent connections per target route

  • Number of target routes

  • Back end server page size

  • Underlying infrastructure (network/SDN solution, CPU, and so on)

While performance in your specific environment will vary, Red Hat lab tests used a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by back ends serving 1 kB static pages is able to handle the following number of transactions per second.

In HTTP keep-alive mode scenarios:

Encryption    LoadBalancerService    HostNetwork
none          21515                  29622
edge          16743                  22913
passthrough   36786                  53295
re-encrypt    21583                  25198

In HTTP close (no keep-alive) scenarios:

Encryption    LoadBalancerService    HostNetwork
none          5719                   8273
edge          2729                   4069
passthrough   4121                   5344
re-encrypt    2320                   2941

The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4. Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB.
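
The spec.tuningOptions.threadCount field can be adjusted on the IngressController resource. The following is a sketch of setting it on the default Ingress Controller; the value 4 matches the test configuration described above:

$ oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"tuningOptions":{"threadCount": 4}}}'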

When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This difference comes from the overhead introduced by the virtualization layer on public clouds, and it holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router:

Number of applications    Application type
5-10                      static file/web server or caching proxy
100-1000                  applications generating dynamic content

In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content.

Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier.
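
Sharding is configured by creating additional IngressController resources that select a subset of routes. The following is a minimal sketch; the controller name, domain, and the type=sharded route label are illustrative:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: sharded.example.com
  routeSelector:
    matchLabels:
      type: sharded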

Postinstallation RHOSP network configuration

You can configure some aspects of an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) cluster after installation.

Configuring application access with floating IP addresses

After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic.

You do not need to perform this procedure if you provided values for platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP in the install-config.yaml file, or for os_api_fip and os_ingress_fip in the inventory.yaml file, during installation. In that case, the floating IP addresses are already set.
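
For reference, the installation-time values mentioned above appear in install-config.yaml under the OpenStack platform section; the addresses in this sketch are placeholders:

platform:
  openstack:
    apiFloatingIP: 203.0.113.10
    ingressFloatingIP: 203.0.113.11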

Prerequisites
  • The OpenShift Container Platform cluster must be installed.

  • Floating IP addresses are enabled as described in the OpenShift Container Platform on RHOSP installation documentation.

Procedure

After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port:

  1. Show the port:

    $ openstack port show <cluster_name>-<cluster_ID>-ingress-port
  2. Attach the port to the IP address:

    $ openstack floating ip set --port <ingress_port_ID> <apps_FIP>
  3. Add a wildcard A record for *.apps. to your DNS file:

    *.apps.<cluster_name>.<base_domain>  IN  A  <apps_FIP>

If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts:

<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain>
<apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain>
<apps_FIP> oauth-openshift.apps.<cluster name>.<base domain>
<apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain>
<apps_FIP> <app name>.apps.<cluster name>.<base domain>

Kuryr ports pools

A Kuryr ports pool maintains a number of ports on standby for pod creation.

Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.

The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes.

Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.

Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior:

  • The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false.

  • The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1.

  • The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting.

    If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.

  • The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3.
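
Taken together, these parameters form the kuryrConfig stanza of the cluster-network-03-config.yml manifest. The following sketch shows the stanza with the default values listed above:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: false
      poolMinPorts: 1
      poolBatchPorts: 3
      poolMaxPorts: 0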

Adjusting Kuryr ports pool settings in active deployments on RHOSP

You can use a custom resource (CR) to configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation on a deployed cluster.

Procedure
  1. From a command line, open the Cluster Network Operator (CNO) CR for editing:

    $ oc edit networks.operator.openshift.io cluster
  2. Edit the settings to meet your requirements. The following file is provided as an example:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
      defaultNetwork:
        type: Kuryr
        kuryrConfig:
          enablePortPoolsPrepopulation: false (1)
          poolMinPorts: 1 (2)
          poolBatchPorts: 3 (3)
          poolMaxPorts: 5 (4)
    1 Set enablePortPoolsPrepopulation to true to make Kuryr create Neutron ports when the first pod that is configured to use the dedicated network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false.
    2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts. The default value is 1.
    3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts. The default value is 3.
    4 If the number of free ports in a pool is higher than the value of poolMaxPorts, Kuryr deletes them until the number matches that value. Setting the value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0.
  3. Save your changes and quit the text editor to commit your changes.

Modifying these options on a running cluster forces the kuryr-controller and kuryr-cni pods to restart. As a result, the creation of new pods and services will be delayed.

Enabling OVS hardware offloading

For clusters that run on Red Hat OpenStack Platform (RHOSP), you can enable Open vSwitch (OVS) hardware offloading.

OVS is a multi-layer virtual switch that enables large-scale, multi-server network virtualization.

Prerequisites
  • You installed a cluster on RHOSP that is configured for single-root input/output virtualization (SR-IOV).

  • You installed the SR-IOV Network Operator on your cluster.

  • You created two hw-offload type virtual function (VF) interfaces on your cluster.

Application layer gateway flows are broken in OpenShift Container Platform versions 4.10, 4.11, and 4.12. Also, you cannot offload the application layer gateway flow in OpenShift Container Platform version 4.13.

Procedure
  1. Create an SriovNetworkNodePolicy policy for the two hw-offload type VF interfaces that are on your cluster:

    The first virtual function interface
    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy (1)
    metadata:
      name: "hwoffload9"
      namespace: openshift-sriov-network-operator
    spec:
      deviceType: netdevice
      isRdma: true
      nicSelector:
        pfNames: (2)
        - ens6
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: 'true'
      numVfs: 1
      priority: 99
      resourceName: "hwoffload9"
    1 Insert the SriovNetworkNodePolicy value here.
    2 Both interfaces must include physical function (PF) names.
    The second virtual function interface
    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy (1)
    metadata:
      name: "hwoffload10"
      namespace: openshift-sriov-network-operator
    spec:
      deviceType: netdevice
      isRdma: true
      nicSelector:
        pfNames: (2)
        - ens5
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: 'true'
      numVfs: 1
      priority: 99
      resourceName: "hwoffload10"
    1 Insert the SriovNetworkNodePolicy value here.
    2 Both interfaces must include physical function (PF) names.
  2. Create NetworkAttachmentDefinition resources for the two interfaces:

    A NetworkAttachmentDefinition resource for the first interface
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9
      name: hwoffload9
      namespace: default
    spec:
        config: '{ "cniVersion":"0.3.1", "name":"hwoffload9","type":"host-device","device":"ens6"
        }'
    A NetworkAttachmentDefinition resource for the second interface
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10
      name: hwoffload10
      namespace: default
    spec:
        config: '{ "cniVersion":"0.3.1", "name":"hwoffload10","type":"host-device","device":"ens5"
        }'
  3. Use the interfaces that you created with a pod. For example:

    A pod that uses the two OVS offload interfaces
    apiVersion: v1
    kind: Pod
    metadata:
      name: dpdk-testpmd
      namespace: default
      annotations:
        irq-load-balancing.crio.io: disable
        cpu-quota.crio.io: disable
        k8s.v1.cni.cncf.io/networks: hwoffload9,hwoffload10
    spec:
      restartPolicy: Never
      containers:
      - name: dpdk-testpmd
        image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest

Attaching an OVS hardware offloading network

You can attach an Open vSwitch (OVS) hardware offloading network to your cluster.

Prerequisites
  • Your cluster is installed and running.

  • You provisioned an OVS hardware offloading network on Red Hat OpenStack Platform (RHOSP) to use with your cluster.

Procedure
  1. Create a file named network.yaml from the following template:

    spec:
      additionalNetworks:
      - name: hwoffload1
        namespace: cnf
        rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "hwoffload1", "type": "host-device","pciBusId": "0000:00:05.0", "ipam": {}}' (1)
        type: Raw

    where:

    pciBusId

    Specifies the PCI bus ID of the device that is connected to the offloading network. If you do not know this value, you can find it by running the following command:

    $ oc describe SriovNetworkNodeState -n openshift-sriov-network-operator
  2. From a command line, enter the following command to patch your cluster with the file:

    $ oc apply -f network.yaml
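
After the additionalNetworks configuration is applied, the Cluster Network Operator creates a NetworkAttachmentDefinition object in the namespace given in the template. To confirm that the attachment exists, you can list the objects in that namespace (cnf in the example above):

$ oc get network-attachment-definitions -n cnf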