Installing Service Mesh

You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported.

Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh.

For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page.

Installing the Service Mesh involves installing the OpenShift Elasticsearch, Jaeger, Kiali, and Service Mesh Operators; creating and managing a ServiceMeshControlPlane resource to deploy the control plane; and creating a ServiceMeshMemberRoll resource to specify the namespaces associated with the Service Mesh.

Mixer’s policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement.
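
If you want to inspect or change this setting from the CLI, the following sketch assumes the control plane is installed in the istio-system project; see the linked procedure for the authoritative steps.

$ oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks
$ oc edit cm -n istio-system istio

Setting disablePolicyChecks to false in the mesh configuration turns policy enforcement back on.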

Multi-tenant control plane installations are the default configuration.

The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project.

Prerequisites

The Service Mesh installation process uses the OperatorHub to install the ServiceMeshControlPlane custom resource definition within the openshift-operators project. The Red Hat OpenShift Service Mesh Operator defines and monitors the ServiceMeshControlPlane resource throughout the deployment, update, and deletion of the control plane.

Starting with Red Hat OpenShift Service Mesh 1.1.18.2, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the Red Hat OpenShift Service Mesh Operator can install the control plane.

Installing the OpenShift Elasticsearch Operator

The default Red Hat OpenShift distributed tracing platform (Jaeger) deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing platform, giving demonstrations, or using Red Hat OpenShift distributed tracing platform (Jaeger) in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform (Jaeger) in production, you must install and configure a persistent storage option, in this case, Elasticsearch.

Prerequisites
  • You have access to the OpenShift Container Platform web console.

  • You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

Do not install Community versions of the Operators. Community Operators are not supported.

If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator.

Procedure
  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

  2. Navigate to Operators → OperatorHub.

  3. Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator.

  4. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator.

  5. Click Install.

  6. On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released.

  7. Accept the default All namespaces on the cluster (default). This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster.

    The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing platform Operators are installed in the openshift-operators namespace.

  8. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.

    The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process.

  9. Click Install.

  10. On the Installed Operators page, select the openshift-operators-redhat project. Wait for the InstallSucceeded status of the OpenShift Elasticsearch Operator before continuing.
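
If you prefer to drive this installation from the CLI, the console choices above correspond roughly to a Namespace, an OperatorGroup, and a Subscription such as the following sketch. The channel and package names are assumptions for a typical cluster; verify them with oc get packagemanifests elasticsearch-operator -n openshift-marketplace before applying.

Example elasticsearch-subscription.yaml (illustrative)
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat
spec:
  channel: stable
  installPlanApproval: Automatic
  name: elasticsearch-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

$ oc create -f elasticsearch-subscription.yaml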

Installing the Red Hat OpenShift distributed tracing platform Operator

You can install the Red Hat OpenShift distributed tracing platform Operator through the OperatorHub.

By default, the Operator is installed in the openshift-operators project.

Prerequisites
  • You have access to the OpenShift Container Platform web console.

  • You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

  • If you require persistent storage, you must install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator.

Procedure
  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

  2. Navigate to Operators → OperatorHub.

  3. Search for the Red Hat OpenShift distributed tracing platform Operator by entering distributed tracing platform in the search field.

  4. Select the Red Hat OpenShift distributed tracing platform Operator, which is provided by Red Hat, to display information about the Operator.

  5. Click Install.

  6. For the Update channel on the Install Operator page, select stable to automatically update the Operator when new versions are released.

  7. Accept the default All namespaces on the cluster (default). This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster.

  8. Accept the default Automatic approval strategy.

    If you accept this default, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of this Operator when a new version of the Operator becomes available.

    If you select Manual updates, the OLM creates an update request when a new version of the Operator becomes available. To update the Operator to the new version, you must then manually approve the update request as a cluster administrator. The Manual approval strategy requires a cluster administrator to manually approve Operator installation and subscription.

  9. Click Install.

  10. Navigate to Operators → Installed Operators.

  11. On the Installed Operators page, select the openshift-operators project. Wait for the Succeeded status of the Red Hat OpenShift distributed tracing platform Operator before continuing.

Installing the Kiali Operator

You must install the Kiali Operator for the Red Hat OpenShift Service Mesh Operator to install the Service Mesh control plane.

Do not install Community versions of the Operators. Community Operators are not supported.

Prerequisites
  • Access to the OpenShift Container Platform web console.

Procedure
  1. Log in to the OpenShift Container Platform web console.

  2. Navigate to Operators → OperatorHub.

  3. Type Kiali into the filter box to find the Kiali Operator.

  4. Click the Kiali Operator provided by Red Hat to display information about the Operator.

  5. Click Install.

  6. On the Operator Installation page, select the stable Update Channel.

  7. Select All namespaces on the cluster (default). This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster.

  8. Select the Automatic Approval Strategy.

    The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process.

  9. Click Install.

  10. The Installed Operators page displays the Kiali Operator’s installation progress.

Installing the Operators

To install Red Hat OpenShift Service Mesh, you must install the Red Hat OpenShift Service Mesh Operator. Repeat the procedure for each additional Operator you want to install.

Additional Operators include:

  • Kiali Operator provided by Red Hat

  • Tempo Operator

Deprecated additional Operators include:

Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead.

  • Red Hat OpenShift distributed tracing platform (Jaeger)

  • OpenShift Elasticsearch Operator

If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator.

Procedure
  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role.

  2. In the OpenShift Container Platform web console, click Operators → OperatorHub.

  3. Type the name of the Operator into the filter box and select the Red Hat version of the Operator. Community versions of the Operators are not supported.

  4. Click Install.

  5. On the Install Operator page for each Operator, accept the default settings.

  6. Click Install. Wait until the Operator installs before repeating the steps for the next Operator you want to install.

    • The Red Hat OpenShift Service Mesh Operator installs in the openshift-operators namespace and is available for all namespaces in the cluster.

    • The Kiali Operator provided by Red Hat installs in the openshift-operators namespace and is available for all namespaces in the cluster.

    • The Tempo Operator installs in the openshift-tempo-operator namespace and is available for all namespaces in the cluster.

    • The Red Hat OpenShift distributed tracing platform (Jaeger) Operator installs in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster.

      Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead.

    • The OpenShift Elasticsearch Operator installs in the openshift-operators-redhat namespace and is available for all namespaces in the cluster.

      Starting with Red Hat OpenShift Service Mesh 2.5, OpenShift Elasticsearch Operator is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed.

Verification
  • After you have installed all four Operators, click Operators → Installed Operators to verify that your Operators are installed.
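
You can also spot-check the installations from the CLI; this sketch assumes the default namespaces used in the procedures above. Each ClusterServiceVersion should report a Succeeded phase.

$ oc get subscriptions -n openshift-operators
$ oc get csv -n openshift-operators
$ oc get csv -n openshift-operators-redhat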

Deploying the Red Hat OpenShift Service Mesh control plane

The ServiceMeshControlPlane resource defines the configuration to be used during installation. You can deploy the default configuration provided by Red Hat or customize the ServiceMeshControlPlane file to fit your business needs.

You can deploy the Service Mesh control plane by using the OpenShift Container Platform web console or from the command line using the oc client tool.

Deploying the control plane from the web console

Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane by using the web console. In this example, istio-system is the name of the control plane project.

Prerequisites
  • The Red Hat OpenShift Service Mesh Operator must be installed.

  • Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation.

  • An account with the cluster-admin role.

Procedure
  1. Log in to the OpenShift Container Platform web console as a user with the cluster-admin role.

  2. Create a project named istio-system.

    1. Navigate to Home → Projects.

    2. Click Create Project.

    3. Enter istio-system in the Name field.

    4. Click Create.

  3. Navigate to Operators → Installed Operators.

  4. If necessary, select istio-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project.

  5. Click the Red Hat OpenShift Service Mesh Operator. Under Provided APIs, the Operator provides links to create two resource types:

    • A ServiceMeshControlPlane resource

    • A ServiceMeshMemberRoll resource

  6. Under Istio Service Mesh Control Plane, click Create ServiceMeshControlPlane.

  7. On the Create Service Mesh Control Plane page, modify the YAML for the default ServiceMeshControlPlane template as needed.

    For additional information about customizing the control plane, see customizing the Red Hat OpenShift Service Mesh installation. For production, you must change the default Jaeger template.

  8. Click Create to create the control plane. The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters.

  9. Click the Istio Service Mesh Control Plane tab.

  10. Click the name of the new control plane.

  11. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured.

Deploying the control plane from the CLI

Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane from the command line.

Prerequisites
  • The Red Hat OpenShift Service Mesh Operator must be installed.

  • Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation.

  • An account with the cluster-admin role.

  • Access to the OpenShift CLI (oc).

Procedure
  1. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.

    $ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
  2. Create a project named istio-system.

    $ oc new-project istio-system
  3. Create a ServiceMeshControlPlane file named istio-installation.yaml using the example found in "Customize the Red Hat OpenShift Service Mesh installation". You can customize the values as needed to match your use case. For production deployments you must change the default Jaeger template.

  4. Run the following command to deploy the control plane:

    $ oc create -n istio-system -f istio-installation.yaml
  5. Execute the following command to see the status of the control plane installation.

    $ oc get smcp -n istio-system

    The installation has finished successfully when the STATUS column is ComponentsReady.

    NAME            READY   STATUS            PROFILES      VERSION   AGE
    basic-install   11/11   ComponentsReady   ["default"]   v1.1.18   4m25s
  6. Run the following command to watch the progress of the Pods during the installation process:

    $ oc get pods -n istio-system -w

    You should see output similar to the following:

    Example output
    NAME                                     READY   STATUS             RESTARTS   AGE
    grafana-7bf5764d9d-2b2f6                 2/2     Running            0          28h
    istio-citadel-576b9c5bbd-z84z4           1/1     Running            0          28h
    istio-egressgateway-5476bc4656-r4zdv     1/1     Running            0          28h
    istio-galley-7d57b47bb7-lqdxv            1/1     Running            0          28h
    istio-ingressgateway-dbb8f7f46-ct6n5     1/1     Running            0          28h
    istio-pilot-546bf69578-ccg5x             2/2     Running            0          28h
    istio-policy-77fd498655-7pvjw            2/2     Running            0          28h
    istio-sidecar-injector-df45bd899-ctxdt   1/1     Running            0          28h
    istio-telemetry-66f697d6d5-cj28l         2/2     Running            0          28h
    jaeger-896945cbc-7lqrr                   2/2     Running            0          11h
    kiali-78d9c5b87c-snjzh                   1/1     Running            0          22h
    prometheus-6dff867c97-gr2n5              2/2     Running            0          28h

For a multitenant installation, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. You can create reusable configurations with ServiceMeshControlPlane templates. For more information, see Creating control plane templates.
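
As a starting point for such a configuration, the following minimal sketch shows a maistra.io/v1 ServiceMeshControlPlane that replaces the default all-in-one Jaeger template with an Elasticsearch-backed production template. The field values are illustrative; adjust them to match your environment and the customization documentation.

Example istio-installation.yaml (illustrative)
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
  namespace: istio-system
spec:
  istio:
    kiali:
      enabled: true
    grafana:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        # For production, use persistent Elasticsearch storage instead of the
        # default in-memory all-in-one template.
        template: production-elasticsearch
        elasticsearch:
          nodeCount: 3
          redundancyPolicy: SingleRedundancy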

Creating the Red Hat OpenShift Service Mesh member roll

The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane. A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment.

You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane, for example istio-system.

Creating the member roll from the web console

You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project.

Prerequisites
  • An installed, verified Red Hat OpenShift Service Mesh Operator.

  • List of existing projects to add to the service mesh.

Procedure
  1. Log in to the OpenShift Container Platform web console.

  2. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane.

    1. Navigate to Home → Projects.

    2. Enter a name in the Name field.

    3. Click Create.

  3. Navigate to Operators → Installed Operators.

  4. Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system.

  5. Click the Red Hat OpenShift Service Mesh Operator.

  6. Click the Istio Service Mesh Member Roll tab.

  7. Click Create ServiceMeshMemberRoll.

  8. Click Members, then enter the name of your project in the Value field. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.

  9. Click Create.

Creating the member roll from the CLI

You can add a project to the ServiceMeshMemberRoll from the command line.

Prerequisites
  • An installed, verified Red Hat OpenShift Service Mesh Operator.

  • List of projects to add to the service mesh.

  • Access to the OpenShift CLI (oc).

Procedure
  1. Log in to the OpenShift Container Platform CLI.

    $ oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443
  2. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane.

    $ oc new-project <your-project>
  3. To add your projects as members, modify the following example YAML. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project.

    Example servicemeshmemberroll-default.yaml
    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system
    spec:
      members:
        # a list of projects joined into the service mesh
        - your-project-name
        - another-project-name
  4. Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace.

    $ oc create -n istio-system -f servicemeshmemberroll-default.yaml
  5. Run the following command to verify the ServiceMeshMemberRoll was created successfully.

    $ oc get smmr -n istio-system default

    The installation has finished successfully when the STATUS column is Configured.
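
    The command prints output similar to the following example; the values shown here are illustrative.

    Example output
    NAME      READY   STATUS       AGE
    default   1/1     Configured   2m27s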

Adding or removing projects from the service mesh

You can add or remove projects from an existing Service Mesh ServiceMeshMemberRoll resource using the web console.

  • You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.

  • The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted.

Adding or removing projects from the member roll using the web console

Prerequisites
  • An installed, verified Red Hat OpenShift Service Mesh Operator.

  • An existing ServiceMeshMemberRoll resource.

  • Name of the project with the ServiceMeshMemberRoll resource.

  • Names of the projects you want to add or remove from the mesh.

Procedure
  1. Log in to the OpenShift Container Platform web console.

  2. Navigate to Operators → Installed Operators.

  3. Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system.

  4. Click the Red Hat OpenShift Service Mesh Operator.

  5. Click the Istio Service Mesh Member Roll tab.

  6. Click the default link.

  7. Click the YAML tab.

  8. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.

  9. Click Save.

  10. Click Reload.

Adding or removing projects from the member roll using the CLI

You can modify an existing Service Mesh member roll using the command line.

Prerequisites
  • An installed, verified Red Hat OpenShift Service Mesh Operator.

  • An existing ServiceMeshMemberRoll resource.

  • Name of the project with the ServiceMeshMemberRoll resource.

  • Names of the projects you want to add or remove from the mesh.

  • Access to the OpenShift CLI (oc).

Procedure
  1. Log in to the OpenShift Container Platform CLI.

  2. Edit the ServiceMeshMemberRoll resource.

    $ oc edit smmr -n <controlplane-namespace>
  3. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource.

    Example servicemeshmemberroll-default.yaml
    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system #control plane project
    spec:
      members:
        # a list of projects joined into the service mesh
        - your-project-name
        - another-project-name
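
If you prefer a non-interactive change, a JSON patch can append a member without opening an editor. This is a sketch; replace the placeholder project name with your own.

$ oc patch smmr default -n <controlplane-namespace> --type='json' -p='[{"op": "add", "path": "/spec/members/-", "value": "your-project-name"}]'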

Manual updates

If you choose to update manually, the Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in OpenShift Container Platform. OLM uses CatalogSources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators.
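
With the Manual approval strategy, pending installations and upgrades surface as InstallPlan resources that a cluster administrator must approve. The following sketch assumes the Operator subscription lives in the openshift-operators namespace.

$ oc get installplans -n openshift-operators
$ oc patch installplan <installplan-name> -n openshift-operators --type merge --patch '{"spec":{"approved":true}}'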

  • For more information about how OpenShift Container Platform handles upgrades, refer to the Operator Lifecycle Manager documentation.

Updating sidecar proxies

To update the configuration for sidecar proxies, the application administrator must restart the application pods.

If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation. Run the following command to redeploy the pods:

$ oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}'

If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods.
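
For reference, automatic sidecar injection in Red Hat OpenShift Service Mesh is opted into per workload by annotating the pod template in the deployment; the deployment name in this excerpt is a placeholder.

Example deployment excerpt
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  template:
    metadata:
      annotations:
        # Opt this workload into automatic sidecar injection
        sidecar.istio.io/inject: "true"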

Next steps