Traffic management

You can control the flow of traffic and API calls between services in Red Hat OpenShift Service Mesh. For example, some services in your service mesh may need to communicate within the mesh and others may need to be hidden. Manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services.

This guide references the Bookinfo sample application to provide routing examples. Install the Bookinfo application to learn how these routing examples work.

Routing and managing traffic

Configure your service mesh by adding your own traffic configuration to Red Hat OpenShift Service Mesh using custom resource definitions in a YAML file.

Traffic management with virtual services

You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can:

  • Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. Mapping a single virtual service to many services is particularly useful in facilitating turning a monolithic application into a composite service built out of distinct microservices without requiring the consumers of the service to adapt to the transition.

  • Configure traffic rules in combination with gateways to control ingress and egress traffic.

Configuring virtual services

Virtual services route requests to services within a service mesh. Each virtual service consists of a set of routing rules that are evaluated in order. Red Hat OpenShift Service Mesh matches each request sent to the virtual service to a specific real destination within the mesh.

Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using round-robin load balancing between all service instances. With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services.

The following example routes requests to different versions of a service depending on which user connects to the application. Use this command to apply this example YAML file, or one you create.

$ oc apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3
EOF

Configuring your virtual host

The following sections explain each field in the YAML file and explain how you can create a virtual host in a virtual service.

Hosts

The hosts field lists the virtual service’s user-addressable destination that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.

The virtual service hostname can be an IP address, a DNS name, or, depending on the platform, a short name that resolves to a fully qualified domain name.

spec:
  hosts:
  - reviews

Routing rules

The http section contains the virtual service’s routing rules, describing match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and zero or more match conditions, depending on your use case.

Match condition

The first routing rule in the example has a condition and begins with the match field. In this example, the rule applies to all requests from the user jason. Add the headers, end-user, and exact fields to select the appropriate requests.

spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason

Destination

The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service’s host, the destination’s host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the host name is a Kubernetes service name:

spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2

Destination rules

Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination.

Load balancing options

By default, Red Hat OpenShift Service Mesh uses a round-robin load balancing policy, where each service instance in the instance pool gets a request in turn. Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset.

  • Random: Requests are forwarded at random to instances in the pool.

  • Weighted: Requests are forwarded to instances in the pool according to a specific percentage.

  • Least requests: Requests are forwarded to instances with the least number of requests.

Destination rule example

The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3

Gateways

You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads.

Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways let you use the full power and flexibility of traffic routing. The Red Hat OpenShift Service Mesh gateway resource describes layer 4-6 load balancing properties, such as the ports to expose and TLS settings. Instead of adding application-layer (L7) traffic routing to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh.

Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for traffic leaving the mesh, which lets you limit which services have access to external networks, or enables secure control of egress traffic to add security to your mesh, for example. You can also use a gateway to configure a purely internal proxy.
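
The ingress gateway example in the next section shows the general shape of a gateway resource. For the egress case, a minimal sketch might look like the following. It assumes the default egress gateway deployment, which is labeled istio: egressgateway; the external host name and the TLS passthrough mode are illustrative, and a virtual service must still be bound to the gateway to route traffic through it.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: egress-gateway-example
spec:
  selector:
    istio: egressgateway # assumed label of the default egress gateway pods
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - ext-svc.example.com # illustrative external destination
    tls:
      mode: PASSTHROUGH # pass the TLS connection through unchanged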

Gateway example

The following example shows a possible gateway configuration for external HTTPS ingress traffic:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      serverCertificate: /tmp/tls.crt
      privateKey: /tmp/tls.key

This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn’t specify any routing for the traffic.

To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service’s gateways field, as shown in the following example:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-svc
spec:
  hosts:
  - ext-host.example.com
  gateways:
  - ext-host-gwy

You can then configure the virtual service with routing rules for the external traffic.
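
For example, a minimal sketch of such routing rules might look like the following. The /reviews URI prefix and the in-mesh reviews destination are illustrative and are not part of the ext-host-gwy example above.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-svc
spec:
  hosts:
  - ext-host.example.com
  gateways:
  - ext-host-gwy
  http:
  - match:
    - uri:
        prefix: /reviews # illustrative match on the request path
    route:
    - destination:
        host: reviews # illustrative in-mesh destination service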

Service entries

A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies can send traffic to the service as if it was a service in your mesh. Configuring service entries allows you to manage traffic for services running outside of the mesh, including the following tasks:

  • Redirect and forward traffic for external destinations, such as APIs consumed from the web, or traffic to services in legacy infrastructure.

  • Define retry, timeout, and fault injection policies for external destinations.

  • Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh.

  • Logically add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh deployment on Kubernetes.

You don’t need to add a service entry for every external service that you want your mesh services to use. By default, Red Hat OpenShift Service Mesh configures the Envoy proxies to pass requests through to unknown services. However, you can’t use Red Hat OpenShift Service Mesh features to control the traffic to destinations that aren’t registered in the mesh.

Service entry examples

The following example mesh-external service entry adds the ext-svc.example.com external dependency to the Red Hat OpenShift Service Mesh service registry:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-entry
spec:
  hosts:
  - ext-svc.example.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS

Specify the external resource using the hosts field. You can qualify it fully or use a wildcard-prefixed domain name.
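
For example, a service entry for a wildcard-prefixed host might look like the following sketch. The domain is illustrative, and resolution is set to NONE because a wildcard host cannot be resolved through DNS.

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: wildcard-svc-entry
spec:
  hosts:
  - "*.example.com" # wildcard-prefixed domain name, illustrative
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: NONE # wildcard hosts cannot be resolved through DNS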

You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ext-res-dr
spec:
  host: ext-svc.example.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem

Sidecar

By default, Red Hat OpenShift Service Mesh configures every Envoy proxy to accept traffic on all the ports of its associated workload, and to reach every workload in the mesh when forwarding traffic. You can use a sidecar configuration to do the following:

  • Fine-tune the set of ports and protocols that an Envoy proxy accepts.

  • Limit the set of services that the Envoy proxy can reach.

You might want to limit sidecar reachability like this in larger applications, where having every proxy configured to reach every other service in the mesh can potentially affect mesh performance due to high memory usage.

Sidecar example

You can specify that you want a sidecar configuration to apply to all workloads in a particular namespace, or choose specific workloads using a workloadSelector. For example, the following sidecar configuration configures all services in the bookinfo namespace to only reach services running in the same namespace and the Red Hat OpenShift Service Mesh control plane (currently needed to use the Red Hat OpenShift Service Mesh policy and telemetry features):

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: bookinfo
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"

Managing ingress traffic

In Red Hat OpenShift Service Mesh, the Ingress Gateway enables Service Mesh features such as monitoring, security, and route rules to be applied to traffic entering the cluster. Configure Service Mesh to expose a service outside of the service mesh using a Service Mesh gateway.
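
As a sketch of what exposing a service can look like, the following resources roughly mirror the Bookinfo sample’s bookinfo-gateway: a gateway that accepts plain HTTP on port 80 for any host, and a virtual service that binds to it and routes the /productpage path to the productpage service. The field values are illustrative and may differ from the file shipped with the sample.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage # route only the product page path through the gateway
    route:
    - destination:
        host: productpage
        port:
          number: 9080 # port the productpage service listens on in the sample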

Determining the ingress IP and ports

Run the following command to determine if your Kubernetes cluster is running in an environment that supports external load balancers:

$ oc get svc istio-ingressgateway -n istio-system

That command returns the NAME, TYPE, CLUSTER-IP, EXTERNAL-IP, PORT(S), and AGE of each item in your namespace.

If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway.

If the EXTERNAL-IP value is <none>, or perpetually <pending>, your environment does not provide an external load balancer for the ingress gateway. You can access the gateway using the service’s node port.

Choose the instructions for your environment:

Configuring routing with a load balancer

Follow these instructions if your environment has an external load balancer.

Set the ingress IP and ports:

$ export INGRESS_HOST=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ export INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
$ export SECURE_INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}')

In some environments, the load balancer may be exposed using a host name instead of an IP address. In that case, the ingress gateway’s EXTERNAL-IP value is not an IP address but a host name, and the previous command fails to set the INGRESS_HOST environment variable.

Use the following command to correct the INGRESS_HOST value:

$ export INGRESS_HOST=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

Configuring routing without a load balancer

Follow these instructions if your environment does not have an external load balancer. You must use a node port instead.

Set the ingress ports:

$ export INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export SECURE_INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
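
Whichever variant you used, the routing examples later in this section refer to a $GATEWAY_URL variable. A minimal sketch for setting it, assuming INGRESS_HOST is already set (for the node port case it must point to the address of one of your cluster nodes), is:

$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT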

Routing example using the bookinfo application

The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. Three different versions of one of the microservices, reviews, have been deployed and are running concurrently.

Prerequisites:
  • Deploy the Bookinfo sample application to work with the following examples.

About this task

To illustrate the problem this causes, access the Bookinfo app /productpage in a browser and refresh several times.

Sometimes the book review output contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other.

This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header.

Applying a virtual service

To route to one version only, apply virtual services that set the default version for the microservices. In the following example, the virtual service routes all traffic to v1 of each microservice.

  1. Run the following command to apply the virtual services:

    $ oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-1.1/samples/bookinfo/networking/virtual-service-all-v1.yaml
  2. To test the command was successful, display the defined routes with the following command:

    $ oc get virtualservices -o yaml

    That command returns the following YAML file.

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: details
      ...
    spec:
      hosts:
      - details
      http:
      - route:
        - destination:
            host: details
            subset: v1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: productpage
      ...
    spec:
      gateways:
      - bookinfo-gateway
      - mesh
      hosts:
      - productpage
      http:
      - route:
        - destination:
            host: productpage
            subset: v1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: ratings
      ...
    spec:
      hosts:
      - ratings
      http:
      - route:
        - destination:
            host: ratings
            subset: v1
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
      ...
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1

    You have configured Service Mesh to route to the v1 version of the Bookinfo microservices, most importantly the reviews service version 1.

Test the new routing configuration

You can easily test the new configuration by once again refreshing the /productpage of the Bookinfo app in a browser, or by using the command-line check shown after the following procedure.

  1. Open the Bookinfo site in your browser. The URL is http://$GATEWAY_URL/productpage, where $GATEWAY_URL is the External IP address of the ingress.

    The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service.

    Your service mesh now routes traffic to one version of a service.
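
If you prefer to check from the command line, the following sketch requests the product page several times and counts the lines containing the star-rating markup. It assumes $GATEWAY_URL is set as described earlier and that star ratings are rendered with the glyphicon-star CSS class, as in the upstream Bookinfo sample; with all traffic pinned to reviews:v1, the count should stay at 0 on every request.

$ for i in 1 2 3 4 5; do curl -s "http://$GATEWAY_URL/productpage" | grep -c "glyphicon-star"; done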

Route based on user identity

Next, change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2.

Note that Service Mesh doesn’t have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service.

  1. Run the following command to enable user-based routing:

    $ oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-1.1/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
  2. Confirm the rule is created:

    $ oc get virtualservice reviews -o yaml

    That command returns the following YAML file.

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
      ...
    spec:
      hosts:
      - reviews
      http:
      - match:
        - headers:
            end-user:
              exact: jason
        route:
        - destination:
            host: reviews
            subset: v2
      - route:
        - destination:
            host: reviews
            subset: v1
  3. On the /productpage of the Bookinfo app, log in as user jason. Refresh the browser. What do you see? The star ratings appear next to each review.

  4. Log in as another user (pick any name you wish). Refresh the browser. Now the stars are gone. This is because traffic is routed to reviews:v1 for all users except Jason.

You have successfully configured Service Mesh to route traffic based on user identity.