An installation of Red Hat OpenShift Service Mesh differs from an installation of Istio in order to provide additional features or to handle differences when deploying on OpenShift Container Platform.
The following features are different in Service Mesh and Istio.
The command line tool for Red Hat OpenShift Service Mesh is oc. Red Hat OpenShift Service Mesh does not support istioctl.
Red Hat OpenShift Service Mesh does not support Istio installation profiles.
Red Hat OpenShift Service Mesh does not support canary upgrades of the service mesh.
The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled.
Red Hat OpenShift Service Mesh does not automatically inject the sidecar into any pods; instead, you opt in to injection by using an annotation, without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift Container Platform capabilities such as builder pods. To enable automatic injection, specify the sidecar.istio.io/inject label or annotation, as described in the Automatic sidecar injection section and shown in the example after the following table.
|  | Upstream Istio | Red Hat OpenShift Service Mesh |
| --- | --- | --- |
| Namespace Label | supports "enabled" and "disabled" | supports "disabled" |
| Pod Label | supports "true" and "false" | supports "true" and "false" |
| Pod Annotation | supports "false" only | supports "true" and "false" |
Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly.
The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix.
Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression.
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin-usernamepolicy
spec:
  action: ALLOW
  rules:
    - when:
        - key: 'request.regex.headers[username]'
          values:
            - "allowed.*"
  selector:
    matchLabels:
      app: httpbin
```
Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system.
Red Hat OpenShift Service Mesh does not support external workloads, such as virtual machines running outside OpenShift on bare metal servers.
You can deploy virtual machines to OpenShift using OpenShift Virtualization. Then, you can apply a mesh policy, such as mTLS or AuthorizationPolicy, to these virtual machines, just like any other pod that is part of a mesh.
A maistra-version label has been added to all resources.
All Ingress resources have been converted to OpenShift Route resources.
Grafana, distributed tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes.
Godebug has been removed from all templates.
The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole.
Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented. Due to tight coupling with the underlying Envoy APIs, backward compatibility cannot be maintained. EnvoyFilter patches are very sensitive to the format of the Envoy configuration that is generated by Istio. If the configuration generated by Istio changes, it has the potential to break the application of the EnvoyFilter.
Red Hat OpenShift Service Mesh includes a CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the init-container network configuration, eliminating the need to grant service accounts and projects access to security context constraints (SCCs) with elevated privileges.
Red Hat OpenShift Service Mesh creates a PeerAuthentication resource that enables or disables Mutual TLS authentication (mTLS) within the mesh.
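For illustration, a mesh-wide PeerAuthentication resource of the kind the Operator manages typically looks like the following sketch; the control plane project name istio-system is an assumption:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # assumed control plane project
spec:
  mtls:
    mode: STRICT             # enforce mutual TLS for workloads in the mesh
```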
Red Hat OpenShift Service Mesh installs ingress and egress gateways by default. You can disable gateway installation in the ServiceMeshControlPlane (SMCP) resource by using the following settings:
spec.gateways.enabled=false to disable both ingress and egress gateways.
spec.gateways.ingress.enabled=false to disable ingress gateways.
spec.gateways.egress.enabled=false to disable egress gateways.
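A minimal sketch of these settings in an SMCP resource; the resource name and namespace are placeholders:

```yaml
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system    # placeholder control plane project
spec:
  gateways:
    enabled: false           # disable both ingress and egress gateways
    # Alternatively, disable them individually:
    # ingress:
    #   enabled: false
    # egress:
    #   enabled: false
```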
The Operator annotates the default gateways to indicate that they are generated by and managed by the Red Hat OpenShift Service Mesh Operator.
Red Hat OpenShift Service Mesh support for multicluster configurations is limited to the federation of service meshes across multiple clusters.
You cannot configure Red Hat OpenShift Service Mesh to process CSRs through the Kubernetes certificate authority (CA).
OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted.
A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation.
Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh will create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will not be a catch all ("*") route, instead it will have a hostname in the form <route-name>[-<project>].<suffix>
. See the OpenShift Container Platform documentation for more information about how default hostnames work and how a cluster-admin
can customize it. If you use Red Hat OpenShift Dedicated, refer to the Red Hat OpenShift Dedicated the dedicated-admin
role.
Subdomains (for example, "*.domain.com") are supported. However, this ability is not enabled by default in OpenShift Container Platform. This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it.
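For example, a hypothetical Gateway that uses a subdomain host; Red Hat OpenShift Service Mesh creates a matching OpenShift route for it, which takes effect only if the cluster is configured to allow wildcard routes. The gateway name and project are placeholders:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
  namespace: istio-system    # placeholder project
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.domain.com"         # subdomain host, as in the example above
```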
Whereas upstream Istio takes a single tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle.
Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances.
The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by Istiod. The components no longer use the cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding.
Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation.
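For example, a ServiceMeshMemberRoll listing member projects; the project names shown here are placeholders:

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default              # the ServiceMeshMemberRoll must be named "default"
  namespace: istio-system    # placeholder control plane project
spec:
  members:
  - bookinfo                 # placeholder member projects
  - my-other-project
```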
Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift Container Platform software-defined networking (SDN) is configured. See About OpenShift SDN for additional details.
If the OpenShift Container Platform cluster is configured to use the SDN plugin:
NetworkPolicy: Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project. This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic; a sketch of such a policy follows the configuration cases below.
Multitenant: Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project).
Subnet: No additional configuration is performed.
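As referenced above, a hypothetical NetworkPolicy that a member project could add to allow ingress from a non-member project; the namespace names and labels are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-nonmember
  namespace: bookinfo                   # placeholder member project
spec:
  podSelector: {}                       # apply to all pods in the member project
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: external-clients   # placeholder non-member project
```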
Upstream Istio has two cluster-scoped resources that it relies on: the MeshPolicy and the ClusterRbacConfig. These are not compatible with a multitenant cluster and have been replaced as described below.
ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies. This must be created in the same project as the control plane.
ServiceMeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role based access control. This must be created in the same project as the control plane.
Installing Kiali via the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform.
Kiali has been enabled by default.
Ingress has been enabled by default.
Updates have been made to the Kiali configmap.
Updates have been made to the ClusterRole settings for Kiali.
Do not edit the configmap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges.
Installing the distributed tracing platform (Jaeger) with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform.
Distributed tracing has been enabled by default for Service Mesh.
Ingress has been enabled by default for Service Mesh.
The name of the Zipkin port has changed to jaeger-collector-zipkin (from http).
Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option.
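For illustration, a Jaeger custom resource that selects the production strategy and declares the Elasticsearch storage type that is used by default for that strategy; the resource name is a placeholder:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production    # placeholder name
spec:
  strategy: production       # production and streaming strategies use Elasticsearch by default
  storage:
    type: elasticsearch
```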
The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator and is already protected by OAuth.
Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar, for the Jaeger agent. These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod’s ingress and egress traffic. The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector.