To access the most current features of Red Hat OpenShift Service Mesh, upgrade to the current version, 2.6.3.
Red Hat uses semantic versioning for product releases. Semantic Versioning is a 3-component number in the format of X.Y.Z, where:
X stands for a Major version. Major releases usually denote some sort of breaking change: architectural changes, API changes, schema changes, and similar major updates.
Y stands for a Minor version. Minor releases contain new features and functionality while maintaining backwards compatibility.
Z stands for a Patch version (also known as a z-stream release). Patch releases are used to address Common Vulnerabilities and Exposures (CVEs) and release bug fixes. New features and functionality are generally not released as part of a Patch release.
Depending on the version of the update you are making, the upgrade process is different.
Patch updates - Patch upgrades are managed by the Operator Lifecycle Manager (OLM); they happen automatically when you update your Operators.
Minor upgrades - Minor upgrades require both updating to the most recent Red Hat OpenShift Service Mesh Operator version and manually modifying the spec.version value in your ServiceMeshControlPlane resources.
Major upgrades - Major upgrades require both updating to the most recent Red Hat OpenShift Service Mesh Operator version and manually modifying the spec.version value in your ServiceMeshControlPlane resources. Because major upgrades can contain changes that are not backwards compatible, additional manual changes might be required.
To understand what version of Red Hat OpenShift Service Mesh you have deployed on your system, you need to understand how each of the component versions is managed.
Operator version - The most current Operator version is 2.6.3. The Operator version number indicates only the version of the currently installed Operator. Because the Red Hat OpenShift Service Mesh Operator supports multiple versions of the Service Mesh control plane, the version of the Operator does not determine the version of your deployed ServiceMeshControlPlane resources.
Upgrading to the latest Operator version automatically applies patch updates, but does not automatically upgrade your Service Mesh control plane to the latest minor version.
ServiceMeshControlPlane version - The ServiceMeshControlPlane version determines what version of Red Hat OpenShift Service Mesh you are using. The value of the spec.version field in the ServiceMeshControlPlane resource controls the architecture and configuration settings that are used to install and deploy Red Hat OpenShift Service Mesh. When you create the Service Mesh control plane, you can set the version in one of two ways:
To configure in the Form View, select the version from the Control Plane Version menu.
To configure in the YAML View, set the value for spec.version in the YAML file.
Operator Lifecycle Manager (OLM) does not manage Service Mesh control plane upgrades, so the version number for your Operator and ServiceMeshControlPlane (SMCP) might not match, unless you have manually upgraded your SMCP.
The maistra.io/ label or annotation should not be used on a user-created custom resource, because it indicates that the resource was generated by, and should be managed by, the Red Hat OpenShift Service Mesh Operator.
During the upgrade, the Operator makes changes, including deleting or replacing files, to resources that include the following labels or annotations that indicate that the resource is managed by the Operator.
Before upgrading, check for user-created custom resources that include the following labels or annotations:
maistra.io/ AND the app.kubernetes.io/managed-by label set to maistra-istio-operator (Red Hat OpenShift Service Mesh)
kiali.io/ (Kiali)
jaegertracing.io/ (Red Hat OpenShift distributed tracing platform (Jaeger))
logging.openshift.io/ (Red Hat Elasticsearch)
Before upgrading, check your user-created custom resources for labels or annotations that indicate they are Operator managed. Remove the label or annotation from custom resources that you do not want to be managed by the Operator.
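For example, the following command lists ConfigMaps across all namespaces that carry the Operator-managed label. The ConfigMap resource type is illustrative; repeat the check for other resource types and for the maistra.io/, kiali.io/, jaegertracing.io/, and logging.openshift.io/ labels or annotations:

$ oc get configmaps --all-namespaces -l app.kubernetes.io/managed-by=maistra-istio-operator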
When upgrading to version 2.0, the Operator only deletes resources with these labels in the same namespace as the SMCP.
When upgrading to version 2.1, the Operator deletes resources with these labels in all namespaces.
Known issues that may affect your upgrade include:
When upgrading an Operator, custom configurations for Jaeger or Kiali might be reverted. Before upgrading an Operator, note any custom configuration settings for the Jaeger or Kiali objects in the Service Mesh production deployment so that you can recreate them.
Red Hat OpenShift Service Mesh does not support the use of EnvoyFilter configuration except where explicitly documented. This is due to tight coupling with the underlying Envoy APIs, meaning that backward compatibility cannot be maintained. If you are using Envoy filters, and the configuration generated by Istio has changed due to the latest version of Envoy introduced by upgrading your ServiceMeshControlPlane, that has the potential to break any EnvoyFilter you may have implemented.
OSSM-1505 ServiceMeshExtension does not work with Red Hat OpenShift Service on AWS version 4.11. Because ServiceMeshExtension has been deprecated in Red Hat OpenShift Service Mesh 2.2, this known issue will not be fixed and you must migrate your extensions to WasmPlugin.
OSSM-1396 If a gateway resource contains the spec.externalIPs setting, rather than being recreated when the ServiceMeshControlPlane is updated, the gateway is removed and never recreated.
OSSM-1052 When configuring a service ExternalIP for the ingress gateway in the Service Mesh control plane, the service is not created. The schema for the SMCP is missing the parameter for the service.
Workaround: Disable the gateway creation in the SMCP spec and manage the gateway deployment entirely manually (including the Service, Role, and RoleBinding).
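A minimal sketch of the workaround, assuming the maistra.io/v2 schema exposes the default ingress gateway under spec.gateways.ingress:

apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
spec:
  gateways:
    ingress:
      # Disable the Operator-managed ingress gateway so that the gateway
      # deployment, Service, Role, and RoleBinding can be managed manually.
      enabled: false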
To keep your Service Mesh patched with the latest security fixes, bug fixes, and software updates, you must keep your Operators updated. You initiate patch updates by upgrading your Operators.
The version of the Operator does not determine the version of your service mesh. The version of your deployed Service Mesh control plane determines your version of Service Mesh.
Because the Red Hat OpenShift Service Mesh Operator supports multiple versions of the Service Mesh control plane, updating the Red Hat OpenShift Service Mesh Operator does not update the spec.version value of your deployed ServiceMeshControlPlane. Also note that the spec.version value is a two-digit number, for example 2.2, and that patch updates, for example 2.2.1, are not reflected in the SMCP version value.
Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in Red Hat OpenShift Service on AWS. OLM queries for available Operators as well as upgrades for installed Operators.
Whether or not you have to take action to upgrade your Operators depends on the settings you selected when installing them. When you installed each of your Operators, you selected an Update Channel and an Approval Strategy. The combination of these two settings determines when and how your Operators are updated.
| Approval Strategy | Versioned channel | "Stable" or "Preview" channel |
|---|---|---|
| Automatic | Automatically updates the Operator for minor and patch releases of that version only. Will not automatically update to the next major version (that is, from version 2.0 to 3.0). Manual change to the Operator subscription required to update to the next major version. | Automatically updates the Operator for all major, minor, and patch releases. |
| Manual | Manual updates required for minor and patch releases of the specified version. Manual change to the Operator subscription required to update to the next major version. | Manual updates required for all major, minor, and patch releases. |
When you update your Red Hat OpenShift Service Mesh Operator, the Operator Lifecycle Manager (OLM) removes the old Operator pod and starts a new pod. Once the new Operator pod starts, the reconciliation process checks the ServiceMeshControlPlane (SMCP), and if there are updated container images available for any of the Service Mesh control plane components, it replaces those Service Mesh control plane pods with ones that use the new container images.
When you upgrade the Kiali and Red Hat OpenShift distributed tracing platform (Jaeger) Operators, the OLM reconciliation process scans the cluster and upgrades the managed instances to the version of the new Operator. For example, if you update the Red Hat OpenShift distributed tracing platform (Jaeger) Operator from version 1.30.2 to version 1.34.1, the Operator scans for running instances of distributed tracing platform (Jaeger) and upgrades them to 1.34.1 as well.
To stay on a particular patch version of Red Hat OpenShift Service Mesh, you must disable automatic updates and remain on that specific version of the Operator.
You must manually update the control plane for minor and major releases. The community Istio project recommends canary upgrades, but Red Hat OpenShift Service Mesh supports only in-place upgrades. Red Hat OpenShift Service Mesh requires that you upgrade from each minor release to the next minor release in sequence. For example, you must upgrade from version 2.0 to version 2.1, and then upgrade to version 2.2. You cannot update from Red Hat OpenShift Service Mesh 2.0 to 2.2 directly.
When you upgrade the Service Mesh control plane, all Operator-managed resources, for example gateways, are also upgraded.
Although you can deploy multiple versions of the control plane in the same cluster, Red Hat OpenShift Service Mesh does not support canary upgrades of the service mesh. That is, you can have different SMCP resources with different values for spec.version, but they cannot be managing the same mesh.
For more information about migrating your extensions, refer to Migrating from ServiceMeshExtension to WasmPlugin resources.
This release disables Red Hat OpenShift distributed tracing platform (Jaeger) by default for new instances of the ServiceMeshControlPlane resource.
When updating existing instances of the ServiceMeshControlPlane resource to Red Hat OpenShift Service Mesh version 2.6, distributed tracing platform (Jaeger) remains enabled by default.
Red Hat OpenShift Service Mesh 2.6 is the last release that includes support for Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator. Both distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator will be removed in the next release. If you are currently using distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator, you must migrate to Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat build of OpenTelemetry.
The default setting for Istio OpenShift Routing (IOR) has changed. The setting is now disabled by default.
You can use IOR by setting the enabled field to true in the spec.gateways.openshiftRoute specification of the ServiceMeshControlPlane resource.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
spec:
  gateways:
    openshiftRoute:
      enabled: true
For consistency across deployments, Istio now configures the concurrency parameter based on the CPU limit allocated to the proxy container. For example, a limit of 2500m sets the concurrency parameter to 3. If you set the concurrency parameter to a value, Istio uses that value to configure how many threads the proxy runs, instead of using the CPU limit.
Previously, the default setting for the parameter was 2.
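If you need a fixed thread count instead, one way (assuming the standard Istio proxy.istio.io/config pod annotation is honored in your environment) is to set concurrency explicitly on the workload pod template; the Deployment name is illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # illustrative workload name
spec:
  template:
    metadata:
      annotations:
        # Run 3 proxy worker threads regardless of the container CPU limit.
        proxy.istio.io/config: |
          concurrency: 3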
Upgrading the Service Mesh control plane from version 2.3 to 2.4 introduces the following behavioral changes:
Support for Istio OpenShift Routing (IOR) has been deprecated. IOR functionality is still enabled, but it will be removed in a future release.
The following cipher suites are no longer supported, and were removed from the list of ciphers used in client-side and server-side TLS negotiations.
ECDHE-ECDSA-AES128-SHA
ECDHE-RSA-AES128-SHA
AES128-GCM-SHA256
AES128-SHA
ECDHE-ECDSA-AES256-SHA
ECDHE-RSA-AES256-SHA
AES256-GCM-SHA384
AES256-SHA
Applications that require access to services that use one of these cipher suites will fail to connect when the proxy initiates a TLS connection.
Upgrading the Service Mesh control plane from version 2.2 to 2.3 introduces the following behavioral changes:
This release requires use of the WasmPlugin API. Support for the ServiceMeshExtension API, which was deprecated in 2.2, has now been removed. If you attempt to upgrade while using the ServiceMeshExtension API, then the upgrade fails.
Upgrading the Service Mesh control plane from version 2.1 to 2.2 introduces the following behavioral changes:
The istio-node DaemonSet is renamed to istio-cni-node to match the name in upstream Istio.
Istio 1.10 updated Envoy to send traffic to the application container using eth0 rather than lo by default.
This release adds support for the WasmPlugin API and deprecates the ServiceMeshExtension API.
Upgrading the Service Mesh control plane from version 2.0 to 2.1 introduces the following architectural and behavioral changes.
Mixer has been completely removed in Red Hat OpenShift Service Mesh 2.1. Upgrading from a Red Hat OpenShift Service Mesh 2.0.x release to 2.1 will be blocked if Mixer is enabled.
If you see the following message when upgrading from v2.0 to v2.1, update the existing Mixer type to Istiod type in the existing Control Plane spec before you update the .spec.version field:
An error occurred
admission webhook smcp.validation.maistra.io denied the request: [support for policy.type "Mixer" and policy.Mixer options have been removed in v2.1, please use another alternative, support for telemetry.type "Mixer" and telemetry.Mixer options have been removed in v2.1, please use another alternative]
For example:
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
spec:
  policy:
    type: Istiod
  telemetry:
    type: Istiod
  version: v2.6
AuthorizationPolicy updates:
With the PROXY protocol, if you are using ipBlocks and notIpBlocks to specify remote IP addresses, update the configuration to use remoteIpBlocks and notRemoteIpBlocks instead.
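For example, a hypothetical policy that previously allowed a client range with ipBlocks would move the addresses to remoteIpBlocks; the policy name and address range below are illustrative:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-remote-clients # illustrative name
  namespace: <namespace>
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        # With the PROXY protocol, match the original client address
        # rather than the address of the immediate peer.
        remoteIpBlocks:
        - 203.0.113.0/24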
Added support for nested JSON Web Token (JWT) claims.
EnvoyFilter breaking changes:
Must use typed_config
Must use typed_config
xDS v2 is no longer supported
Deprecated filter names
Older versions of proxies may report 503 status codes when receiving 1xx or 204 status codes from newer proxies.
To upgrade Red Hat OpenShift Service Mesh, you must update the version field of the Red Hat OpenShift Service Mesh ServiceMeshControlPlane v2 resource. Then, once it is configured and applied, restart the application pods to update each sidecar proxy and its configuration.
You are running Red Hat OpenShift Service on AWS 4.9 or later.
You have the latest Red Hat OpenShift service Mesh Operator.
Switch to the project that contains your ServiceMeshControlPlane resource. In this example, istio-system is the name of the Service Mesh control plane project.
$ oc project istio-system
Check your v2 ServiceMeshControlPlane resource configuration to verify it is valid.
Run the following command to view your ServiceMeshControlPlane resource as a v2 resource.
$ oc get smcp -o yaml
Back up your Service Mesh control plane configuration.
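For example, assuming the control plane resource is named basic in the istio-system project, you can save a copy before changing the version:

$ oc get smcp basic -n istio-system -o yaml > smcp-backup.yaml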
Update the .spec.version field and apply the configuration.
For example:
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
spec:
  version: v2.6
Alternatively, instead of using the command line, you can use the web console to edit the Service Mesh control plane. In the Red Hat OpenShift Service on AWS web console, click Project and select the project name you just entered.
Click Operators → Installed Operators.
Find your ServiceMeshControlPlane instance.
Select YAML view and update the text of the YAML file, as shown in the previous example.
Click Save.
Upgrading from version 1.1 to 2.0 requires manual steps that migrate your workloads and applications to a new instance of Red Hat OpenShift Service Mesh running the new version.
You must upgrade to Red Hat OpenShift Service on AWS 4.7 before you upgrade to Red Hat OpenShift Service Mesh 2.0.
You must have the Red Hat OpenShift Service Mesh version 2.0 Operator. If you selected the automatic upgrade path, the Operator automatically downloads the latest information. However, there are steps you must take to use the features in Red Hat OpenShift Service Mesh version 2.0.
To upgrade Red Hat OpenShift Service Mesh, you must create an instance of the Red Hat OpenShift Service Mesh ServiceMeshControlPlane v2 resource in a new namespace. Then, once it is configured, move your microservice applications and workloads from your old mesh to the new service mesh.
Check your v1 ServiceMeshControlPlane resource configuration to make sure it is valid.
Run the following command to view your ServiceMeshControlPlane resource as a v2 resource.
$ oc get smcp -o yaml
Check the spec.techPreview.errored.message field in the output for information about any invalid fields.
If there are invalid fields in your v1 resource, the resource is not reconciled and cannot be edited as a v2 resource. All updates to v2 fields will be overridden by the original v1 settings. To fix the invalid fields, you can replace, patch, or edit the v1 version of the resource. You can also delete the resource without fixing it. After the resource has been fixed, it can be reconciled, and you can modify or view the v2 version of the resource.
To fix the resource by editing a file, use oc get to retrieve the resource, edit the text file locally, and replace the resource with the file you edited.
$ oc get smcp.v1.maistra.io <smcp_name> > smcp-resource.yaml
#Edit the smcp-resource.yaml file.
$ oc replace -f smcp-resource.yaml
To fix the resource using patching, use oc patch.
$ oc patch smcp.v1.maistra.io <smcp_name> --type json --patch '[{"op": "replace","path":"/spec/path/to/bad/setting","value":"corrected-value"}]'
To fix the resource by editing with command-line tools, use oc edit.
$ oc edit smcp.v1.maistra.io <smcp_name>
Back up your Service Mesh control plane configuration. Switch to the project that contains your ServiceMeshControlPlane resource. In this example, istio-system is the name of the Service Mesh control plane project.
$ oc project istio-system
Enter the following command to retrieve the current configuration. Your <smcp_name> is specified in the metadata of your ServiceMeshControlPlane resource, for example basic-install or full-install.
$ oc get servicemeshcontrolplanes.v1.maistra.io <smcp_name> -o yaml > <smcp_name>.v1.yaml
Convert your ServiceMeshControlPlane to a v2 control plane version that contains information about your configuration as a starting point.
$ oc get smcp <smcp_name> -o yaml > <smcp_name>.v2.yaml
Create a project. In the Red Hat OpenShift Service on AWS console Project menu, click New Project and enter a name for your project, istio-system-upgrade, for example. Or, you can run this command from the CLI.
$ oc new-project istio-system-upgrade
Update the metadata.namespace field in your v2 ServiceMeshControlPlane with your new project name. In this example, use istio-system-upgrade.
Update the version field from 1.1 to 2.0 or remove it in your v2 ServiceMeshControlPlane.
Create a ServiceMeshControlPlane in the new namespace. On the command line, run the following command to deploy the control plane with the v2 version of the ServiceMeshControlPlane that you retrieved. In this example, replace <smcp_name>.v2.yaml with the path to your file.
$ oc create -n istio-system-upgrade -f <smcp_name>.v2.yaml
Alternatively, you can use the console to create the Service Mesh control plane. In the Red Hat OpenShift Service on AWS web console, click Project. Then, select the project name you just entered.
Click Operators → Installed Operators.
Click Create ServiceMeshControlPlane.
Select YAML view and paste the text of the YAML file you retrieved into the field. Check that the apiVersion field is set to maistra.io/v2 and modify the metadata.namespace field to use the new namespace, for example istio-system-upgrade.
Click Create.
The ServiceMeshControlPlane resource has been changed for Red Hat OpenShift Service Mesh version 2.0. After you create a v2 version of the ServiceMeshControlPlane resource, modify it to take advantage of the new features and to fit your deployment. Consider the following changes to the specification and behavior of Red Hat OpenShift Service Mesh 2.0 as you modify your ServiceMeshControlPlane resource. You can also refer to the Red Hat OpenShift Service Mesh 2.0 product documentation for new information about the features you use. The v2 resource must be used for Red Hat OpenShift Service Mesh 2.0 installations.
The architectural units used by previous versions have been replaced by Istiod. In 2.0, the Service Mesh control plane components Mixer, Pilot, Citadel, Galley, and the sidecar injector functionality have been combined into a single component, Istiod.
Although Mixer is no longer supported as a control plane component, Mixer policy and telemetry plugins are now supported through WASM extensions in Istiod. Mixer can be enabled for policy and telemetry if you need to integrate legacy Mixer plugins.
Secret Discovery Service (SDS) is used to distribute certificates and keys to sidecars directly from Istiod. In Red Hat OpenShift Service Mesh version 1.1, Citadel generated secrets, which the proxies used to retrieve their client certificates and keys.
The following annotations are no longer supported in v2.0. If you are using one of these annotations, you must update your workload before moving it to a v2.0 Service Mesh control plane (see the example after this list).
sidecar.maistra.io/proxyCPULimit has been replaced with sidecar.istio.io/proxyCPULimit. If you were using sidecar.maistra.io annotations on your workloads, you must modify those workloads to use sidecar.istio.io equivalents instead.
sidecar.maistra.io/proxyMemoryLimit has been replaced with sidecar.istio.io/proxyMemoryLimit.
sidecar.istio.io/discoveryAddress is no longer supported. Also, the default discovery address has moved from pilot.<control_plane_namespace>.svc:15010 (or port 15011, if mTLS is enabled) to istiod-<smcp_name>.<control_plane_namespace>.svc:15012.
The health status port is no longer configurable and is hard-coded to 15021. If you were defining a custom status port, for example, status.sidecar.istio.io/port, you must remove the override before moving the workload to a v2.0 Service Mesh control plane. Readiness checks can still be disabled by setting the status port to 0.
Kubernetes Secret resources are no longer used to distribute client certificates for sidecars. Certificates are now distributed through Istiod's SDS service. If you were relying on mounted secrets, they are no longer available for workloads in v2.0 Service Mesh control planes.
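For example, a workload that sets the old Maistra annotations would be updated to the sidecar.istio.io/ equivalents before migration; the Deployment name and resource values are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # illustrative
spec:
  template:
    metadata:
      annotations:
        # Previously: sidecar.maistra.io/proxyCPULimit: "500m"
        sidecar.istio.io/proxyCPULimit: "500m"
        # Previously: sidecar.maistra.io/proxyMemoryLimit: "128Mi"
        sidecar.istio.io/proxyMemoryLimit: "128Mi"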
Some features in Red Hat OpenShift Service Mesh 2.0 work differently than they did in previous versions.
The readiness port on gateways has moved from 15020 to 15021.
The target host visibility includes VirtualService, as well as ServiceEntry resources. It includes any restrictions applied through Sidecar resources.
Automatic mutual TLS is enabled by default. Proxy-to-proxy communication is automatically configured to use mTLS, regardless of global PeerAuthentication policies in place.
Secure connections are always used when proxies communicate with the Service Mesh control plane, regardless of the spec.security.controlPlane.mtls setting. The spec.security.controlPlane.mtls setting is only used when configuring connections for Mixer telemetry or policy.
Policy resources must be migrated to new resource types for use with v2.0 Service Mesh control planes: PeerAuthentication and RequestAuthentication. Depending on the specific configuration in your Policy resource, you may have to configure multiple resources to achieve the same effect.
Mutual TLS enforcement is accomplished using the security.istio.io/v1beta1 PeerAuthentication resource. The legacy spec.peers.mtls.mode field maps directly to the new resource's spec.mtls.mode field. Selection criteria have changed from specifying a service name in spec.targets[x].name to a label selector in spec.selector.matchLabels. In PeerAuthentication, the labels must match the selector on the services named in the targets list. Any port-specific settings must be mapped into spec.portLevelMtls.
Additional authentication methods specified in spec.origins must be mapped into a security.istio.io/v1beta1 RequestAuthentication resource. spec.selector.matchLabels must be configured similarly to the same field on PeerAuthentication. Configuration specific to JWT principals from spec.origins.jwt items maps to similar fields in spec.rules items.
spec.origins[x].jwt.triggerRules specified in the Policy must be mapped into one or more security.istio.io/v1beta1 AuthorizationPolicy resources. Any spec.selector.labels must be configured similarly to the same field on RequestAuthentication.
spec.origins[x].jwt.triggerRules.excludedPaths must be mapped into an AuthorizationPolicy whose spec.action is set to ALLOW, with spec.rules[x].to.operation.path entries matching the excluded paths.
spec.origins[x].jwt.triggerRules.includedPaths must be mapped into a separate AuthorizationPolicy whose spec.action is set to ALLOW, with spec.rules[x].to.operation.path entries matching the included paths, and spec.rules[x].from.source.requestPrincipals entries that align with the specified spec.origins[x].jwt.issuer in the Policy resource.
ServiceMeshPolicy was configured automatically for the Service Mesh control plane through the spec.istio.global.mtls.enabled setting in the v1 resource, or the spec.security.dataPlane.mtls setting in the v2 resource. For v2 control planes, a functionally equivalent PeerAuthentication resource is created during installation. This feature is deprecated in Red Hat OpenShift Service Mesh version 2.0.
These resources were replaced by the security.istio.io/v1beta1 AuthorizationPolicy resource.
Mimicking RbacConfig behavior requires writing a default AuthorizationPolicy whose settings depend on the spec.mode specified in the RbacConfig.
When spec.mode is set to OFF, no resource is required, as the default policy is ALLOW, unless an AuthorizationPolicy applies to the request.
When spec.mode is set to ON, set spec: {}. You must create AuthorizationPolicy policies for all services in the mesh.
When spec.mode is set to ON_WITH_INCLUSION, you must create an AuthorizationPolicy with spec: {} in each included namespace. Inclusion of individual services is not supported by AuthorizationPolicy. However, as soon as any AuthorizationPolicy is created that applies to the workloads for the service, all other requests not explicitly allowed will be denied.
When spec.mode is set to ON_WITH_EXCLUSION, the mode is not supported by AuthorizationPolicy. A global DENY policy can be created, but an AuthorizationPolicy must be created for every workload in the mesh because there is no allow-all policy that can be applied to either a namespace or a workload.
AuthorizationPolicy includes configuration for both the selector to which the configuration applies, which is similar to the function ServiceRoleBinding provides, and the rules which should be applied, which is similar to the function ServiceRole provides.
This resource is replaced by using a security.istio.io/v1beta1 AuthorizationPolicy resource with an empty spec.selector in the Service Mesh control plane's namespace. This policy will be the default authorization policy applied to all workloads in the mesh. For specific migration details, see RbacConfig above.
Mixer components are disabled by default in version 2.0. If you rely on Mixer plugins for your workload, you must configure your version 2.0 ServiceMeshControlPlane to include the Mixer components.
To enable the Mixer policy components, add the following snippet to your ServiceMeshControlPlane.
spec:
  policy:
    type: Mixer
To enable the Mixer telemetry components, add the following snippet to your ServiceMeshControlPlane.
spec:
  telemetry:
    type: Mixer
Legacy Mixer plugins can also be migrated to WASM and integrated using the new ServiceMeshExtension (maistra.io/v1alpha1) custom resource.
Built-in WASM filters included in the upstream Istio distribution are not available in Red Hat OpenShift Service Mesh 2.0.
When using mTLS with workload-specific PeerAuthentication policies, a corresponding DestinationRule is required to allow traffic if the workload policy differs from the namespace/global policy.
Auto mTLS is enabled by default, but can be disabled by setting spec.security.dataPlane.automtls to false in the ServiceMeshControlPlane resource. When disabling auto mTLS, DestinationRules might be required for proper communication between services. For example, setting PeerAuthentication to STRICT for one namespace might prevent services in other namespaces from accessing them, unless a DestinationRule configures TLS mode for the services in the namespace.
For information about mTLS, see Enabling mutual Transport Layer Security (mTLS).
To disable mTLS for the productpage service in the bookinfo sample application, your Policy resource was configured the following way for Red Hat OpenShift Service Mesh v1.1.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: productpage-mTLS-disable
  namespace: <namespace>
spec:
  targets:
  - name: productpage
To disable mTLS for the productpage service in the bookinfo sample application, use the following example to configure your PeerAuthentication resource for Red Hat OpenShift Service Mesh v2.0.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: productpage-mTLS-disable
  namespace: <namespace>
spec:
  mtls:
    mode: DISABLE
  selector:
    matchLabels:
      # this should match the selector for the "productpage" service
      app: productpage
To enable mTLS with JWT authentication for the productpage service in the bookinfo sample application, your Policy resource was configured the following way for Red Hat OpenShift Service Mesh v1.1.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: productpage-mTLS-with-JWT
  namespace: <namespace>
spec:
  targets:
  - name: productpage
    ports:
    - number: 9000
  peers:
  - mtls:
  origins:
  - jwt:
      issuer: "https://securetoken.google.com"
      audiences:
      - "productpage"
      jwksUri: "https://www.googleapis.com/oauth2/v1/certs"
      jwtHeaders:
      - "x-goog-iap-jwt-assertion"
      triggerRules:
      - excludedPaths:
        - exact: /health_check
  principalBinding: USE_ORIGIN
To enable mTLS with JWT authentication for the productpage service in the bookinfo sample application, use the following example to configure your PeerAuthentication resource for Red Hat OpenShift Service Mesh v2.0.
#require mtls for productpage:9000
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: productpage-mTLS-with-JWT
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      # this should match the selector for the "productpage" service
      app: productpage
  portLevelMtls:
    9000:
      mode: STRICT
---
#JWT authentication for productpage
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: productpage-mTLS-with-JWT
  namespace: <namespace>
spec:
  selector:
    matchLabels:
      # this should match the selector for the "productpage" service
      app: productpage
  jwtRules:
  - issuer: "https://securetoken.google.com"
    audiences:
    - "productpage"
    jwksUri: "https://www.googleapis.com/oauth2/v1/certs"
    fromHeaders:
    - name: "x-goog-iap-jwt-assertion"
---
#Require JWT token to access product page service from
#any client to all paths except /health_check
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: productpage-mTLS-with-JWT
  namespace: <namespace>
spec:
  action: ALLOW
  selector:
    matchLabels:
      # this should match the selector for the "productpage" service
      app: productpage
  rules:
  - to: # require JWT token to access all other paths
    - operation:
        notPaths:
        - /health_check
    from:
    - source:
        # if using principalBinding: USE_PEER in the Policy,
        # then use principals, e.g.
        # principals:
        # - "*"
        requestPrincipals:
        - "*"
  - to: # no JWT token required to access health_check
    - operation:
        paths:
        - /health_check
You can configure the following items with these configuration recipes.
Mutual TLS for data plane communication is configured through spec.security.dataPlane.mtls in the ServiceMeshControlPlane resource, which is false by default.
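For example, to enable data plane mTLS in the ServiceMeshControlPlane resource:

spec:
  security:
    dataPlane:
      mtls: true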
Istiod manages client certificates and private keys used by service proxies. By default, Istiod uses a self-signed certificate for signing, but you can configure a custom certificate and private key. For more information about how to configure signing keys, see Adding an external certificate authority key and certificate.
Tracing is configured in spec.tracing. Currently, the only type of tracer that is supported is Jaeger. Sampling is a scaled integer representing 0.01% increments; for example, 1 is 0.01% and 10000 is 100%. The tracing implementation and sampling rate can be specified:
spec:
  tracing:
    sampling: 100 # 1%
    type: Jaeger
Jaeger is configured in the addons section of the ServiceMeshControlPlane resource.
spec:
  addons:
    jaeger:
      name: jaeger
      install:
        storage:
          type: Memory # or Elasticsearch for production mode
          memory:
            maxTraces: 100000
          elasticsearch: # the following values only apply if storage type is Elasticsearch
            storage: # specific storageclass configuration for the Jaeger Elasticsearch (optional)
              size: "100G"
              storageClassName: "storageclass"
            nodeCount: 3
            redundancyPolicy: SingleRedundancy
  runtime:
    components:
      tracing.jaeger: {} # general Jaeger specific runtime configuration (optional)
      tracing.jaeger.elasticsearch: # runtime configuration for Jaeger Elasticsearch deployment (optional)
        container:
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "1Gi"
The Jaeger installation can be customized with the install field. Container configuration, such as resource limits, is configured in spec.runtime.components.jaeger related fields. If a Jaeger resource matching the value of spec.addons.jaeger.name exists, the Service Mesh control plane will be configured to use the existing installation. Use an existing Jaeger resource to fully customize your Jaeger installation.
Kiali and Grafana are configured under the addons section of the ServiceMeshControlPlane resource.
spec:
  addons:
    grafana:
      enabled: true
      install: {} # customize install
    kiali:
      enabled: true
      name: kiali
      install: {} # customize install
The Grafana and Kiali installations can be customized through their respective install fields. Container customization, such as resource limits, is configured in spec.runtime.components.kiali and spec.runtime.components.grafana. If an existing Kiali resource matching the value of name exists, the Service Mesh control plane configures the Kiali resource for use with the control plane. Some fields in the Kiali resource are overridden, such as the accessible_namespaces list, as well as the endpoints for Grafana, Prometheus, and tracing. Use an existing resource to fully customize your Kiali installation.
Resources are configured under spec.runtime.<component>. The following component names are supported.
| Component | Description | Versions supported |
|---|---|---|
| security | Citadel container | v1.0/1.1 |
| galley | Galley container | v1.0/1.1 |
| pilot | Pilot/Istiod container | v1.0/1.1/2.0 |
| mixer | istio-telemetry and istio-policy containers | v1.0/1.1 |
| mixer.policy | istio-policy container | v2.0 |
| mixer.telemetry | istio-telemetry container | v2.0 |
| global.oauthproxy | oauth-proxy container used with various addons | v1.0/1.1/2.0 |
| sidecarInjectorWebhook | sidecar injector webhook container | v1.0/1.1 |
| tracing.jaeger | general Jaeger container - not all settings may be applied. Complete customization of the Jaeger installation is supported by specifying an existing Jaeger resource in the Service Mesh control plane configuration. | v1.0/1.1/2.0 |
| tracing.jaeger.agent | settings specific to the Jaeger agent | v1.0/1.1/2.0 |
| tracing.jaeger.allInOne | settings specific to Jaeger allInOne | v1.0/1.1/2.0 |
| tracing.jaeger.collector | settings specific to the Jaeger collector | v1.0/1.1/2.0 |
| tracing.jaeger.elasticsearch | settings specific to the Jaeger elasticsearch deployment | v1.0/1.1/2.0 |
| tracing.jaeger.query | settings specific to Jaeger query | v1.0/1.1/2.0 |
| prometheus | prometheus container | v1.0/1.1/2.0 |
| kiali | Kiali container - complete customization of the Kiali installation is supported by specifying an existing Kiali resource in the Service Mesh control plane configuration. | v1.0/1.1/2.0 |
| grafana | Grafana container | v1.0/1.1/2.0 |
| 3scale | 3scale container | v1.0/1.1/2.0 |
| wasmExtensions.cacher | WASM extensions cacher container | v2.0 - tech preview |
Some components support resource limiting and scheduling. For more information, see Performance and scalability.
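For example, following the same pattern as the Jaeger runtime configuration shown earlier, resource requests and limits for the Pilot/Istiod container can be set under spec.runtime.components.pilot; the values are illustrative:

spec:
  runtime:
    components:
      pilot:
        container:
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "1"
              memory: "2Gi"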
Your data plane will still function after you have upgraded the control plane. However, to apply updates to the Envoy proxy and any changes to the proxy configuration, you must restart your application pods and workloads.
To complete the migration, restart all of the application pods in the mesh to upgrade the Envoy sidecar proxies and their configuration.
To perform a rolling update of a deployment, use the following command:
$ oc rollout restart <deployment>
You must perform a rolling update for all applications that make up the mesh.