The main difference between a multi-tenant installation and a cluster-wide installation is the scope of privileges used by the control plane deployments, for example, Galley and Pilot. The components no longer use the cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding, but instead rely on project-scoped RoleBinding resources.
Every project in the members list will have a RoleBinding for each service account associated with a control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation.
Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift software-defined networking (SDN) is configured. See About OpenShift SDN for additional details.
If the OpenShift Container Platform cluster is configured to use the SDN plug-in:
- NetworkPolicy: Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from the Service Mesh, this NetworkPolicy resource is deleted from the project.

  Note: This also restricts ingress to member projects only. If ingress from non-member projects is required, you need to create a NetworkPolicy resource to allow that traffic through.
- Multitenant: Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project).
- Subnet: No additional configuration is performed.
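In the NetworkPolicy case, the note above mentions that ingress from non-member projects must be allowed explicitly. A minimal sketch of such a policy might look like the following, where the project names (bookinfo as a member project, legacy-apps as a non-member project) are assumptions for illustration:

```yaml
# Allow ingress to all pods in the member project "bookinfo"
# from pods in the non-member project "legacy-apps".
# Assumes the namespace carries the kubernetes.io/metadata.name
# label, which recent clusters set automatically.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-legacy-apps
  namespace: bookinfo
spec:
  podSelector: {}        # applies to every pod in bookinfo
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: legacy-apps
```

NetworkPolicy rules are additive, so this policy permits the extra traffic without modifying the policy that Service Mesh manages for member-to-member ingress.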