Preparing for users

Understanding identity provider configuration

The OpenShift Container Platform control plane includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API.

As an administrator, you can configure OAuth to specify an identity provider after you install your cluster.

About identity providers in OpenShift Container Platform

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster.

OpenShift Container Platform user names containing /, :, and % are not supported.

Supported identity providers

You can configure the following types of identity providers:

Identity provider Description

htpasswd

Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd.

Keystone

Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database.

LDAP

Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.

Basic authentication

Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism.

Request header

Configure a request-header identity provider to identify users from request header values, such as X-Remote-User. It is typically used in combination with an authenticating proxy, which sets the request header value.

GitHub

Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise’s OAuth authentication server.

GitLab

Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider.

Google

Configure a google identity provider using Google’s OpenID Connect integration.

OpenID Connect

Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow.

After you define an identity provider, you can use RBAC to define and apply permissions.

Identity provider parameters

The following parameters are common to all identity providers:

Parameter Description

name

The provider name is prefixed to provider user names to form an identity name.

mappingMethod

Defines how new identities are mapped to users when they log in. Enter one of the following values:

claim

The default value. Provisions a user with the identity’s preferred user name. Fails if a user with that user name is already mapped to another identity.

lookup

Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users.

generate

Provisions a user with the identity’s preferred user name. If a user with the preferred user name is already mapped to an existing identity, a unique user name is generated. For example, myuser2. This method should not be used in combination with external processes that require exact matches between OpenShift Container Platform user names and identity provider user names, such as LDAP group sync.

add

Provisions a user with the identity’s preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names.

When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add.

Sample identity provider CR

The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider.

Sample identity provider CR
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_identity_provider (1)
    mappingMethod: claim (2)
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret (3)
1 This provider name is prefixed to provider user names to form an identity name.
2 Controls how mappings are established between this provider’s identities and User objects.
3 An existing secret containing a file generated using htpasswd.
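
The CR references a secret named htpass-secret, which must already exist in the openshift-config namespace. The following is a minimal sketch of creating the flat file and the secret and then applying the CR; the file name, user name, and CR path are illustrative:

$ htpasswd -c -B -b users.htpasswd <username> <password>
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
$ oc apply -f </path/to/oauth-cr.yaml>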

Using RBAC to define and apply permissions

Understand and apply role-based access control.

RBAC overview

Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project.

Cluster administrators can use cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and to all projects.

Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action.

Authorization is managed using:

Authorization object Description

Rules

Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods.

Roles

Collections of rules. You can associate, or bind, users and groups to multiple roles.

Bindings

Associations between users or groups and a role.
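
As an illustration of how these objects relate, the following sketch shows a single rule (get and list on pods) collected into a local role, and a binding that associates a user with that role. The role, project, and user names are illustrative only:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role                        # a collection of rules, scoped to one project
metadata:
  name: pod-reader                # illustrative role name
  namespace: example-project      # illustrative project
rules:
- apiGroups: [""]                 # a rule: permitted verbs on a set of objects
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding                 # associates a user or group with the role
metadata:
  name: pod-reader-binding
  namespace: example-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice                     # illustrative user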

There are two levels of RBAC roles and bindings that control authorization:

RBAC level Description

Cluster RBAC

Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles.

Local RBAC

Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles.

A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.

This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles.

During evaluation, both the cluster role bindings and the local role bindings are used. For example:

  1. Cluster-wide "allow" rules are checked.

  2. Locally-bound "allow" rules are checked.

  3. Deny by default.

Default cluster roles

OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally.

It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly.

Default cluster role Description

admin

A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota.

basic-user

A user that can get basic information about projects and users.

cluster-admin

A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project.

cluster-status

A user that can get basic cluster status information.

cluster-reader

A user that can get or view most of the objects but cannot modify them.

edit

A user that can modify most objects in a project but does not have the power to view or modify roles or bindings.

self-provisioner

A user that can create their own projects.

view

A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings.

Be mindful of the difference between local and cluster bindings. For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin, plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin.

The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below.

OpenShift Container Platform RBAC

The get pods/exec, get pods/*, and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges.

Evaluating authorization

OpenShift Container Platform evaluates authorization by using:

Identity

The user name and list of groups that the user belongs to.

Action

The action you perform. In most cases, this consists of:

  • Project: The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities.

  • Verb: The action itself: get, list, create, update, delete, deletecollection, or watch.

  • Resource name: The API endpoint that you access.

Bindings

The full list of bindings, the associations between users or groups with a role.

OpenShift Container Platform evaluates authorization by using the following steps:

  1. The identity and the project-scoped action is used to find all bindings that apply to the user or their groups.

  2. Bindings are used to locate all the roles that apply.

  3. Roles are used to find all the rules that apply.

  4. The action is checked against each rule to find a match.

  5. If no matching rule is found, the action is then denied by default.

Remember that users and groups can be associated with, or bound to, multiple roles at the same time.

Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each is associated with.

The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin.

Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level.
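
To spot-check how a particular action would be evaluated for your own identity, you can query the API server directly with the oc auth can-i subcommand. A brief sketch; the verb, resource, and project name are illustrative:

$ oc auth can-i create pods -n joe-project
$ oc auth can-i delete nodes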

Cluster role aggregation

The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation, where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources.
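
For example, if you create a custom resource, you can opt a new cluster role into this aggregation by labeling it so that its rules are folded into one of the default roles. The following sketch uses an illustrative API group and resource; the rbac.authorization.k8s.io/aggregate-to-view label marks the rules for aggregation into the default view role:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: widgets-view                                    # illustrative name
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true" # aggregate into the default view role
rules:
- apiGroups: ["example.com"]                            # illustrative API group of a custom resource
  resources: ["widgets"]                                # illustrative custom resource
  verbs: ["get", "list", "watch"]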

Projects and namespaces

A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces.

Namespaces provide a unique scope for:

  • Named resources to avoid basic naming collisions.

  • Delegated management authority to trusted users.

  • The ability to limit community resource consumption.

Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users.

A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects.

Projects can have a separate name, displayName, and description.

  • The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters.

  • The optional displayName is how the project is displayed in the web console (defaults to name).

  • The optional description can be a more detailed description of the project and is also visible in the web console.

Each project scopes its own set of:

Object Description

Objects

Pods, services, replication controllers, etc.

Policies

Rules for which users can or cannot perform actions on objects.

Constraints

Quotas for each kind of object that can be limited.

Service accounts

Service accounts act automatically with designated access to objects in the project.

Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects.

Developers and administrators can interact with projects by using the CLI or the web console.
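
For example, a user who is allowed to create projects might create one from the CLI as follows; the project name, display name, and description are illustrative:

$ oc new-project my-app --display-name="My Application" --description="Example project for the my-app team"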

Default projects

OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and they have guaranteed admission by the kubelet. Pods created for master components in these namespaces are already marked as critical.

You cannot assign a security context constraint (SCC) to pods created in one of the default namespaces: default, kube-system, kube-public, openshift-node, openshift-infra, and openshift. You cannot use these namespaces for running pods or services.

Viewing cluster roles and bindings

You can use the oc CLI to view cluster roles and bindings by using the oc describe command.

Prerequisites
  • Install the oc CLI.

  • Obtain permission to view the cluster roles and bindings.

Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings.

Procedure
  1. To view the cluster roles and their associated rule sets:

    $ oc describe clusterrole.rbac
    Example output
    Name:         admin
    Labels:       kubernetes.io/bootstrapping=rbac-defaults
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    PolicyRule:
      Resources                                                  Non-Resource URLs  Resource Names  Verbs
      ---------                                                  -----------------  --------------  -----
      .packages.apps.redhat.com                                  []                 []              [* create update patch delete get list watch]
      imagestreams                                               []                 []              [create delete deletecollection get list patch update watch create get list watch]
      imagestreams.image.openshift.io                            []                 []              [create delete deletecollection get list patch update watch create get list watch]
      secrets                                                    []                 []              [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update]
      buildconfigs/webhooks                                      []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildconfigs                                               []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildlogs                                                  []                 []              [create delete deletecollection get list patch update watch get list watch]
      deploymentconfigs/scale                                    []                 []              [create delete deletecollection get list patch update watch get list watch]
      deploymentconfigs                                          []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreamimages                                          []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreammappings                                        []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreamtags                                            []                 []              [create delete deletecollection get list patch update watch get list watch]
      processedtemplates                                         []                 []              [create delete deletecollection get list patch update watch get list watch]
      routes                                                     []                 []              [create delete deletecollection get list patch update watch get list watch]
      templateconfigs                                            []                 []              [create delete deletecollection get list patch update watch get list watch]
      templateinstances                                          []                 []              [create delete deletecollection get list patch update watch get list watch]
      templates                                                  []                 []              [create delete deletecollection get list patch update watch get list watch]
      deploymentconfigs.apps.openshift.io/scale                  []                 []              [create delete deletecollection get list patch update watch get list watch]
      deploymentconfigs.apps.openshift.io                        []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildconfigs.build.openshift.io/webhooks                   []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildconfigs.build.openshift.io                            []                 []              [create delete deletecollection get list patch update watch get list watch]
      buildlogs.build.openshift.io                               []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreamimages.image.openshift.io                       []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreammappings.image.openshift.io                     []                 []              [create delete deletecollection get list patch update watch get list watch]
      imagestreamtags.image.openshift.io                         []                 []              [create delete deletecollection get list patch update watch get list watch]
      routes.route.openshift.io                                  []                 []              [create delete deletecollection get list patch update watch get list watch]
      processedtemplates.template.openshift.io                   []                 []              [create delete deletecollection get list patch update watch get list watch]
      templateconfigs.template.openshift.io                      []                 []              [create delete deletecollection get list patch update watch get list watch]
      templateinstances.template.openshift.io                    []                 []              [create delete deletecollection get list patch update watch get list watch]
      templates.template.openshift.io                            []                 []              [create delete deletecollection get list patch update watch get list watch]
      serviceaccounts                                            []                 []              [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch]
      imagestreams/secrets                                       []                 []              [create delete deletecollection get list patch update watch]
      rolebindings                                               []                 []              [create delete deletecollection get list patch update watch]
      roles                                                      []                 []              [create delete deletecollection get list patch update watch]
      rolebindings.authorization.openshift.io                    []                 []              [create delete deletecollection get list patch update watch]
      roles.authorization.openshift.io                           []                 []              [create delete deletecollection get list patch update watch]
      imagestreams.image.openshift.io/secrets                    []                 []              [create delete deletecollection get list patch update watch]
      rolebindings.rbac.authorization.k8s.io                     []                 []              [create delete deletecollection get list patch update watch]
      roles.rbac.authorization.k8s.io                            []                 []              [create delete deletecollection get list patch update watch]
      networkpolicies.extensions                                 []                 []              [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch]
      networkpolicies.networking.k8s.io                          []                 []              [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch]
      configmaps                                                 []                 []              [create delete deletecollection patch update get list watch]
      endpoints                                                  []                 []              [create delete deletecollection patch update get list watch]
      persistentvolumeclaims                                     []                 []              [create delete deletecollection patch update get list watch]
      pods                                                       []                 []              [create delete deletecollection patch update get list watch]
      replicationcontrollers/scale                               []                 []              [create delete deletecollection patch update get list watch]
      replicationcontrollers                                     []                 []              [create delete deletecollection patch update get list watch]
      services                                                   []                 []              [create delete deletecollection patch update get list watch]
      daemonsets.apps                                            []                 []              [create delete deletecollection patch update get list watch]
      deployments.apps/scale                                     []                 []              [create delete deletecollection patch update get list watch]
      deployments.apps                                           []                 []              [create delete deletecollection patch update get list watch]
      replicasets.apps/scale                                     []                 []              [create delete deletecollection patch update get list watch]
      replicasets.apps                                           []                 []              [create delete deletecollection patch update get list watch]
      statefulsets.apps/scale                                    []                 []              [create delete deletecollection patch update get list watch]
      statefulsets.apps                                          []                 []              [create delete deletecollection patch update get list watch]
      horizontalpodautoscalers.autoscaling                       []                 []              [create delete deletecollection patch update get list watch]
      cronjobs.batch                                             []                 []              [create delete deletecollection patch update get list watch]
      jobs.batch                                                 []                 []              [create delete deletecollection patch update get list watch]
      daemonsets.extensions                                      []                 []              [create delete deletecollection patch update get list watch]
      deployments.extensions/scale                               []                 []              [create delete deletecollection patch update get list watch]
      deployments.extensions                                     []                 []              [create delete deletecollection patch update get list watch]
      ingresses.extensions                                       []                 []              [create delete deletecollection patch update get list watch]
      replicasets.extensions/scale                               []                 []              [create delete deletecollection patch update get list watch]
      replicasets.extensions                                     []                 []              [create delete deletecollection patch update get list watch]
      replicationcontrollers.extensions/scale                    []                 []              [create delete deletecollection patch update get list watch]
      poddisruptionbudgets.policy                                []                 []              [create delete deletecollection patch update get list watch]
      deployments.apps/rollback                                  []                 []              [create delete deletecollection patch update]
      deployments.extensions/rollback                            []                 []              [create delete deletecollection patch update]
      catalogsources.operators.coreos.com                        []                 []              [create update patch delete get list watch]
      clusterserviceversions.operators.coreos.com                []                 []              [create update patch delete get list watch]
      installplans.operators.coreos.com                          []                 []              [create update patch delete get list watch]
      packagemanifests.operators.coreos.com                      []                 []              [create update patch delete get list watch]
      subscriptions.operators.coreos.com                         []                 []              [create update patch delete get list watch]
      buildconfigs/instantiate                                   []                 []              [create]
      buildconfigs/instantiatebinary                             []                 []              [create]
      builds/clone                                               []                 []              [create]
      deploymentconfigrollbacks                                  []                 []              [create]
      deploymentconfigs/instantiate                              []                 []              [create]
      deploymentconfigs/rollback                                 []                 []              [create]
      imagestreamimports                                         []                 []              [create]
      localresourceaccessreviews                                 []                 []              [create]
      localsubjectaccessreviews                                  []                 []              [create]
      podsecuritypolicyreviews                                   []                 []              [create]
      podsecuritypolicyselfsubjectreviews                        []                 []              [create]
      podsecuritypolicysubjectreviews                            []                 []              [create]
      resourceaccessreviews                                      []                 []              [create]
      routes/custom-host                                         []                 []              [create]
      subjectaccessreviews                                       []                 []              [create]
      subjectrulesreviews                                        []                 []              [create]
      deploymentconfigrollbacks.apps.openshift.io                []                 []              [create]
      deploymentconfigs.apps.openshift.io/instantiate            []                 []              [create]
      deploymentconfigs.apps.openshift.io/rollback               []                 []              [create]
      localsubjectaccessreviews.authorization.k8s.io             []                 []              [create]
      localresourceaccessreviews.authorization.openshift.io      []                 []              [create]
      localsubjectaccessreviews.authorization.openshift.io       []                 []              [create]
      resourceaccessreviews.authorization.openshift.io           []                 []              [create]
      subjectaccessreviews.authorization.openshift.io            []                 []              [create]
      subjectrulesreviews.authorization.openshift.io             []                 []              [create]
      buildconfigs.build.openshift.io/instantiate                []                 []              [create]
      buildconfigs.build.openshift.io/instantiatebinary          []                 []              [create]
      builds.build.openshift.io/clone                            []                 []              [create]
      imagestreamimports.image.openshift.io                      []                 []              [create]
      routes.route.openshift.io/custom-host                      []                 []              [create]
      podsecuritypolicyreviews.security.openshift.io             []                 []              [create]
      podsecuritypolicyselfsubjectreviews.security.openshift.io  []                 []              [create]
      podsecuritypolicysubjectreviews.security.openshift.io      []                 []              [create]
      jenkins.build.openshift.io                                 []                 []              [edit view view admin edit view]
      builds                                                     []                 []              [get create delete deletecollection get list patch update watch get list watch]
      builds.build.openshift.io                                  []                 []              [get create delete deletecollection get list patch update watch get list watch]
      projects                                                   []                 []              [get delete get delete get patch update]
      projects.project.openshift.io                              []                 []              [get delete get delete get patch update]
      namespaces                                                 []                 []              [get get list watch]
      pods/attach                                                []                 []              [get list watch create delete deletecollection patch update]
      pods/exec                                                  []                 []              [get list watch create delete deletecollection patch update]
      pods/portforward                                           []                 []              [get list watch create delete deletecollection patch update]
      pods/proxy                                                 []                 []              [get list watch create delete deletecollection patch update]
      services/proxy                                             []                 []              [get list watch create delete deletecollection patch update]
      routes/status                                              []                 []              [get list watch update]
      routes.route.openshift.io/status                           []                 []              [get list watch update]
      appliedclusterresourcequotas                               []                 []              [get list watch]
      bindings                                                   []                 []              [get list watch]
      builds/log                                                 []                 []              [get list watch]
      deploymentconfigs/log                                      []                 []              [get list watch]
      deploymentconfigs/status                                   []                 []              [get list watch]
      events                                                     []                 []              [get list watch]
      imagestreams/status                                        []                 []              [get list watch]
      limitranges                                                []                 []              [get list watch]
      namespaces/status                                          []                 []              [get list watch]
      pods/log                                                   []                 []              [get list watch]
      pods/status                                                []                 []              [get list watch]
      replicationcontrollers/status                              []                 []              [get list watch]
      resourcequotas/status                                      []                 []              [get list watch]
      resourcequotas                                             []                 []              [get list watch]
      resourcequotausages                                        []                 []              [get list watch]
      rolebindingrestrictions                                    []                 []              [get list watch]
      deploymentconfigs.apps.openshift.io/log                    []                 []              [get list watch]
      deploymentconfigs.apps.openshift.io/status                 []                 []              [get list watch]
      controllerrevisions.apps                                   []                 []              [get list watch]
      rolebindingrestrictions.authorization.openshift.io         []                 []              [get list watch]
      builds.build.openshift.io/log                              []                 []              [get list watch]
      imagestreams.image.openshift.io/status                     []                 []              [get list watch]
      appliedclusterresourcequotas.quota.openshift.io            []                 []              [get list watch]
      imagestreams/layers                                        []                 []              [get update get]
      imagestreams.image.openshift.io/layers                     []                 []              [get update get]
      builds/details                                             []                 []              [update]
      builds.build.openshift.io/details                          []                 []              [update]
    
    
    Name:         basic-user
    Labels:       <none>
    Annotations:  openshift.io/description: A user that can get basic information about projects.
    	              rbac.authorization.kubernetes.io/autoupdate: true
    PolicyRule:
    	Resources                                           Non-Resource URLs  Resource Names  Verbs
    	  ---------                                           -----------------  --------------  -----
    	  selfsubjectrulesreviews                             []                 []              [create]
    	  selfsubjectaccessreviews.authorization.k8s.io       []                 []              [create]
    	  selfsubjectrulesreviews.authorization.openshift.io  []                 []              [create]
    	  clusterroles.rbac.authorization.k8s.io              []                 []              [get list watch]
    	  clusterroles                                        []                 []              [get list]
    	  clusterroles.authorization.openshift.io             []                 []              [get list]
    	  storageclasses.storage.k8s.io                       []                 []              [get list]
    	  users                                               []                 [~]             [get]
    	  users.user.openshift.io                             []                 [~]             [get]
    	  projects                                            []                 []              [list watch]
    	  projects.project.openshift.io                       []                 []              [list watch]
    	  projectrequests                                     []                 []              [list]
    	  projectrequests.project.openshift.io                []                 []              [list]
    
    Name:         cluster-admin
    Labels:       kubernetes.io/bootstrapping=rbac-defaults
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    PolicyRule:
    Resources  Non-Resource URLs  Resource Names  Verbs
    ---------  -----------------  --------------  -----
    *.*        []                 []              [*]
               [*]                []              [*]
    
    ...
  2. To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles:

    $ oc describe clusterrolebinding.rbac
    Example output
    Name:         alertmanager-main
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  alertmanager-main
    Subjects:
      Kind            Name               Namespace
      ----            ----               ---------
      ServiceAccount  alertmanager-main  openshift-monitoring
    
    
    Name:         basic-users
    Labels:       <none>
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    Role:
      Kind:  ClusterRole
      Name:  basic-user
    Subjects:
      Kind   Name                  Namespace
      ----   ----                  ---------
      Group  system:authenticated
    
    
    Name:         cloud-credential-operator-rolebinding
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  cloud-credential-operator-role
    Subjects:
      Kind            Name     Namespace
      ----            ----     ---------
      ServiceAccount  default  openshift-cloud-credential-operator
    
    
    Name:         cluster-admin
    Labels:       kubernetes.io/bootstrapping=rbac-defaults
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    Role:
      Kind:  ClusterRole
      Name:  cluster-admin
    Subjects:
      Kind   Name            Namespace
      ----   ----            ---------
      Group  system:masters
    
    
    Name:         cluster-admins
    Labels:       <none>
    Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
    Role:
      Kind:  ClusterRole
      Name:  cluster-admin
    Subjects:
      Kind   Name                   Namespace
      ----   ----                   ---------
      Group  system:cluster-admins
      User   system:admin
    
    
    Name:         cluster-api-manager-rolebinding
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  cluster-api-manager-role
    Subjects:
      Kind            Name     Namespace
      ----            ----     ---------
      ServiceAccount  default  openshift-machine-api
    
    ...

Viewing local roles and bindings

You can use the oc CLI to view local roles and bindings by using the oc describe command.

Prerequisites
  • Install the oc CLI.

  • Obtain permission to view the local roles and bindings:

    • Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings.

    • Users with the admin default cluster role bound locally can view and manage roles and bindings in that project.

Procedure
  1. To view the current set of local role bindings, which shows the users and groups that are bound to various roles for the current project:

    $ oc describe rolebinding.rbac
  2. To view the local role bindings for a different project, add the -n flag to the command:

    $ oc describe rolebinding.rbac -n joe-project
    Example output
    Name:         admin
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  admin
    Subjects:
      Kind  Name        Namespace
      ----  ----        ---------
      User  kube:admin
    
    
    Name:         system:deployers
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows deploymentconfigs in this namespace to rollout pods in
                    this namespace.  It is auto-managed by a controller; remove
                    subjects to disa...
    Role:
      Kind:  ClusterRole
      Name:  system:deployer
    Subjects:
      Kind            Name      Namespace
      ----            ----      ---------
      ServiceAccount  deployer  joe-project
    
    
    Name:         system:image-builders
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows builds in this namespace to push images to this
                    namespace.  It is auto-managed by a controller; remove subjects
                    to disable.
    Role:
      Kind:  ClusterRole
      Name:  system:image-builder
    Subjects:
      Kind            Name     Namespace
      ----            ----     ---------
      ServiceAccount  builder  joe-project
    
    
    Name:         system:image-pullers
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows all pods in this namespace to pull images from this
                    namespace.  It is auto-managed by a controller; remove subjects
                    to disable.
    Role:
      Kind:  ClusterRole
      Name:  system:image-puller
    Subjects:
      Kind   Name                                Namespace
      ----   ----                                ---------
      Group  system:serviceaccounts:joe-project

Adding roles to users

You can use the oc adm administrator CLI to manage the roles and bindings.

Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands.

You can bind any of the default cluster roles to local users or groups in your project.

Procedure
  1. Add a role to a user in a specific project:

    $ oc adm policy add-role-to-user <role> <user> -n <project>

    For example, you can add the admin role to the alice user in the joe project by running:

    $ oc adm policy add-role-to-user admin alice -n joe

    You can alternatively apply the following YAML to add the role to the user:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: admin-0
      namespace: joe
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: admin
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: alice
  2. View the local role bindings and verify the addition in the output:

    $ oc describe rolebinding.rbac -n <project>

    For example, to view the local role bindings for the joe project:

    $ oc describe rolebinding.rbac -n joe
    Example output
    
    Name:         admin
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  admin
    Subjects:
      Kind  Name        Namespace
      ----  ----        ---------
      User  kube:admin
    
    
    Name:         admin-0
    Labels:       <none>
    Annotations:  <none>
    Role:
      Kind:  ClusterRole
      Name:  admin
    Subjects:
      Kind  Name   Namespace
      ----  ----   ---------
      User  alice (1)
    
    
    Name:         system:deployers
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows deploymentconfigs in this namespace to rollout pods in
                    this namespace.  It is auto-managed by a controller; remove
                    subjects to disa...
    Role:
      Kind:  ClusterRole
      Name:  system:deployer
    Subjects:
      Kind            Name      Namespace
      ----            ----      ---------
      ServiceAccount  deployer  joe
    
    
    Name:         system:image-builders
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows builds in this namespace to push images to this
                    namespace.  It is auto-managed by a controller; remove subjects
                    to disable.
    Role:
      Kind:  ClusterRole
      Name:  system:image-builder
    Subjects:
      Kind            Name     Namespace
      ----            ----     ---------
      ServiceAccount  builder  joe
    
    
    Name:         system:image-pullers
    Labels:       <none>
    Annotations:  openshift.io/description:
                    Allows all pods in this namespace to pull images from this
                    namespace.  It is auto-managed by a controller; remove subjects
                    to disable.
    Role:
      Kind:  ClusterRole
      Name:  system:image-puller
    Subjects:
      Kind   Name                                Namespace
      ----   ----                                ---------
      Group  system:serviceaccounts:joe
    1 The alice user has been added to the admin-0 RoleBinding.

Creating a local role

You can create a local role for a project and then bind it to a user.

Procedure
  1. To create a local role for a project, run the following command:

    $ oc create role <name> --verb=<verb> --resource=<resource> -n <project>

    In this command, specify:

    • <name>, the local role’s name

    • <verb>, a comma-separated list of the verbs to apply to the role

    • <resource>, the resources that the role applies to

    • <project>, the project name

    For example, to create a local role that allows a user to view pods in the blue project, run the following command:

    $ oc create role podview --verb=get --resource=pod -n blue
  2. To bind the new role to a user, run the following command:

    $ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue

Creating a cluster role

You can create a cluster role.

Procedure
  1. To create a cluster role, run the following command:

    $ oc create clusterrole <name> --verb=<verb> --resource=<resource>

    In this command, specify:

    • <name>, the cluster role’s name

    • <verb>, a comma-separated list of the verbs to apply to the role

    • <resource>, the resources that the role applies to

    For example, to create a cluster role that allows a user to view pods, run the following command:

    $ oc create clusterrole podviewonly --verb=get --resource=pod
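
    The command creates a ClusterRole object. As a rough sketch, the equivalent YAML for the podviewonly example looks like the following:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: podviewonly
    rules:
    - apiGroups: [""]      # core API group
      resources: ["pods"]
      verbs: ["get"]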

Local role binding commands

When you manage a user or group’s associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used.

You can use the following commands for local RBAC management.

Table 1. Local role binding operations
Command Description

$ oc adm policy who-can <verb> <resource>

Indicates which users can perform an action on a resource.

$ oc adm policy add-role-to-user <role> <username>

Binds a specified role to specified users in the current project.

$ oc adm policy remove-role-from-user <role> <username>

Removes a given role from specified users in the current project.

$ oc adm policy remove-user <username>

Removes specified users and all of their roles in the current project.

$ oc adm policy add-role-to-group <role> <groupname>

Binds a given role to specified groups in the current project.

$ oc adm policy remove-role-from-group <role> <groupname>

Removes a given role from specified groups in the current project.

$ oc adm policy remove-group <groupname>

Removes specified groups and all of their roles in the current project.
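
For example, to list who can delete pods in the current project, or in another project by adding the -n flag, you might run the following commands; the project name is illustrative:

$ oc adm policy who-can delete pods
$ oc adm policy who-can delete pods -n joe-project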

Cluster role binding commands

You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources.

Table 2. Cluster role binding operations
Command Description

$ oc adm policy add-cluster-role-to-user <role> <username>

Binds a given role to specified users for all projects in the cluster.

$ oc adm policy remove-cluster-role-from-user <role> <username>

Removes a given role from specified users for all projects in the cluster.

$ oc adm policy add-cluster-role-to-group <role> <groupname>

Binds a given role to specified groups for all projects in the cluster.

$ oc adm policy remove-cluster-role-from-group <role> <groupname>

Removes a given role from specified groups for all projects in the cluster.

Creating a cluster admin

The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources.

Prerequisites
  • You must have created a user to define as the cluster admin.

Procedure
  • Define the user as a cluster admin:

    $ oc adm policy add-cluster-role-to-user cluster-admin <user>
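
    You can alternatively apply a ClusterRoleBinding object to grant the role. A minimal sketch; the binding name and user are illustrative:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cluster-admin-0        # illustrative binding name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: alice                  # illustrative user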

The kubeadmin user

OpenShift Container Platform creates a cluster administrator, kubeadmin, after the installation process completes.

This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program’s output. For example:

INFO Install complete!
INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI.
INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes).
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com
INFO Login to the console with user: kubeadmin, password: <provided>

Removing the kubeadmin user

After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security.

If you follow this procedure before another user is a cluster-admin, then OpenShift Container Platform must be reinstalled. It is not possible to undo this command.

Prerequisites
  • You must have configured at least one identity provider.

  • You must have added the cluster-admin role to a user.

  • You must be logged in as an administrator.

Procedure
  • Remove the kubeadmin secrets:

    $ oc delete secrets kubeadmin -n kube-system

Image configuration

Understand and configure image registry settings.

Image controller configuration parameters

The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster. Its spec offers the following configuration parameters.

Parameters such as DisableScheduledImport, MaxImagesBulkImportedPerRepository, MaxScheduledImportsPerMinute, ScheduledImageImportMinimumIntervalSeconds, and InternalRegistryHostname are not configurable.

Parameter Description

allowedRegistriesForImport

Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions.

Every element of this list contains a location of the registry specified by the registry domain name.

domainName: Specifies a domain name for the registry. If the registry uses a port other than 80 or 443, include the port in the domain name as well.

insecure: Indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure.

additionalTrustedCA

A reference to a config map containing additional CAs that should be trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds.

The namespace for this config map is openshift-config. The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust.

externalRegistryHostnames

Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. The value must be in hostname[:port] format.

registrySources

Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry.

insecureRegistries: Registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk (*) wildcard character as a prefix to the domain name. For example, *.example.com. You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest.

blockedRegistries: Registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk (*) wildcard character as a prefix to the domain name. For example, *.example.com. You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest. All other registries are allowed.

allowedRegistries: Registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk (*) wildcard character as a prefix to the domain name. For example, *.example.com. You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest. All other registries are blocked.

containerRuntimeSearchRegistries: Registries for which image pull and push actions are allowed using image short names. All other registries are blocked.

Either blockedRegistries or allowedRegistries can be set, but not both.

When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all required registries, including the registry.redhat.io and quay.io registries and the internalRegistryHostname, to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added.
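
For example, a spec that blocks a single registry while leaving all other registries available might look like the following sketch; the registry name is illustrative:

apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    blockedRegistries:
    - untrusted.example.com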

The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster.

Parameter Description

internalRegistryHostname

Set by the Image Registry Operator, which controls the internalRegistryHostname parameter. This parameter sets the hostname for the default internal image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable.

externalRegistryHostnames

Set by the Image Registry Operator, this parameter provides the external hostnames for the image registry when it is exposed externally. The first value is used in the publicDockerImageRepository field in image streams. The values must be in hostname[:port] format.

Configuring image registry settings

You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries and reboots the nodes when it detects changes.

Procedure
  1. Edit the image.config.openshift.io/cluster custom resource:

    $ oc edit image.config.openshift.io/cluster

    The following is an example image.config.openshift.io/cluster CR:

    apiVersion: config.openshift.io/v1
    kind: Image (1)
    metadata:
      annotations:
        release.openshift.io/create-only: "true"
      creationTimestamp: "2019-05-17T13:44:26Z"
      generation: 1
      name: cluster
      resourceVersion: "8302"
      selfLink: /apis/config.openshift.io/v1/images/cluster
      uid: e34555da-78a9-11e9-b92b-06d6c7da38dc
    spec:
      allowedRegistriesForImport: (2)
        - domainName: quay.io
          insecure: false
      additionalTrustedCA: (3)
        name: myconfigmap
      registrySources: (4)
        allowedRegistries:
        - example.com
        - quay.io
        - registry.redhat.io
        - image-registry.openshift-image-registry.svc:5000
        - reg1.io/myrepo/myapp:latest
        insecureRegistries:
        - insecure.com
    status:
      internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
    1 Image: Holds cluster-wide information about how to handle images. The canonical, and only valid, name is cluster.
    2 allowedRegistriesForImport: Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions.
    3 additionalTrustedCA: A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config. The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust.
    4 registrySources: Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether to allow access to insecure registries and which registries can be used to resolve image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry.

    When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default internal image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list.

    When using the allowedRegistries, blockedRegistries, or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest.

    Insecure external registries should be avoided to reduce possible security risks.

  2. To check that the changes are applied, list your nodes:

    $ oc get nodes
    Example output
    NAME                                       STATUS                     ROLES    AGE   VERSION
    ci-ln-j5cd0qt-f76d1-vfj5x-master-0         Ready                         master   98m   v1.22.1
    ci-ln-j5cd0qt-f76d1-vfj5x-master-1         Ready,SchedulingDisabled      master   99m   v1.22.1
    ci-ln-j5cd0qt-f76d1-vfj5x-master-2         Ready                         master   98m   v1.22.1
    ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4   Ready                         worker   90m   v1.22.1
    ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz   NotReady,SchedulingDisabled   worker   90m   v1.22.1
    ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv   Ready                         worker   90m   v1.22.1

For more information on the allowed, blocked, and insecure registry parameters, see Configuring image registry settings.

Configuring additional trust stores for image registry access

The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access.

Prerequisites
  • The certificate authorities (CA) must be PEM-encoded.

Procedure

You can create a config map in the openshift-config namespace and use its name in the additionalTrustedCA field of the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries.

For each additional registry CA to trust, the config map key is the registry hostname, including the port if applicable, and the value is the PEM-encoded certificate content.

Image registry CA config map example
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-registry-ca
data:
  registry.example.com: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  registry-with-port.example.com..5000: | (1)
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
1 If the registry has a port, such as registry-with-port.example.com:5000, replace : with .., because config map keys cannot contain the : character.

You can configure additional CAs with the following procedure.

  1. To configure an additional CA, create the config map in the openshift-config namespace and reference it in the image.config.openshift.io custom resource:

    $ oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config
    $ oc edit image.config.openshift.io cluster
    spec:
      additionalTrustedCA:
        name: registry-config
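
    As a quick check, you can confirm that the config map exists and that the custom resource references it; registry-config is the example name used above:

    $ oc get configmap registry-config -n openshift-config
    $ oc get image.config.openshift.io/cluster -o yaml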

Configuring image registry repository mirroring

Setting up container registry repository mirroring enables you to do the following:

  • Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry.

  • Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used.

The attributes of repository mirroring in OpenShift Container Platform include:

  • Image pulls are resilient to registry downtimes.

  • Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images.

  • A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried.

  • The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster.

  • When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node.

Setting up repository mirroring can be done in the following ways:

  • At OpenShift Container Platform installation:

    By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company’s firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment.

  • After OpenShift Container Platform installation:

    Even if you do not configure mirroring during OpenShift Container Platform installation, you can do so later by using the ImageContentSourcePolicy object.

The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy object that identifies:

  • The source of the container image repository you want to mirror.

  • A separate entry for each mirror repository you want to offer the content requested from the source repository.

You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy object. You cannot add a pull secret to a project.
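
If your mirror registries require authentication, the credentials belong in the global pull secret in the openshift-config namespace rather than in a project. As a hedged sketch, assuming you have merged your registry credentials into a local file named pull-secret.json:

$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret.json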

Prerequisites
  • Access to the cluster as a user with the cluster-admin role.

Procedure
  1. Configure the mirrored repositories by doing one of the following:

    • Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring. Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time.

    • Using a tool such as skopeo to copy images manually from the source directory to the mirrored repository.

      For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example:

      $ skopeo copy \
      docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \
      docker://example.io/example/ubi-minimal

      In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi8/ubi-minimal image from registry.access.redhat.com. After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository.
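
      If the source image is a manifest list and you want to mirror every architecture, or if the mirror registry requires credentials, skopeo accepts additional options. A sketch, reusing the hypothetical example.io registry and placeholder credentials:

      $ skopeo copy --all \
      --dest-creds <user_name>:<password> \
      docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \
      docker://example.io/example/ubi-minimal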

  2. Log in to your OpenShift Container Platform cluster.

  3. Create an ImageContentSourcePolicy file (for example, registryrepomirror.yaml), replacing the source and mirrors with your own registry and repository pairs and images:

    apiVersion: operator.openshift.io/v1alpha1
    kind: ImageContentSourcePolicy
    metadata:
      name: ubi8repo
    spec:
      repositoryDigestMirrors:
      - mirrors:
        - example.io/example/ubi-minimal (1)
        - example.com/example/ubi-minimal (2)
        source: registry.access.redhat.com/ubi8/ubi-minimal (3)
      - mirrors:
        - mirror.example.com/redhat
        source: registry.redhat.io/openshift4 (4)
      - mirrors:
        - mirror.example.com
        source: registry.redhat.io (5)
      - mirrors:
        - mirror.example.net/image
        source: registry.example.com/example/myimage (6)
      - mirrors:
        - mirror.example.net
        source: registry.example.com/example (7)
      - mirrors:
        - mirror.example.net/registry-example-com
        source: registry.example.com (8)
    1 Indicates the name of the image registry and repository.
    2 Indicates multiple mirror repositories for each target repository. If one mirror is down, the target repository can use another mirror.
    3 Indicates the registry and repository containing the content that is mirrored.
    4 You can configure a namespace inside a registry to use any image in that namespace. If you use a registry domain as a source, the ImageContentSourcePolicy resource is applied to all repositories from the registry.
    5 If you configure the registry name, the ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry.
    6 Pulls the image mirror.example.net/image@sha256:….
    7 Pulls the image myimage in the source registry namespace from the mirror mirror.example.net/myimage@sha256:….
    8 Pulls the image registry.example.com/example/myimage from the mirror registry mirror.example.net/registry-example-com/example/myimage@sha256:…. The ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry mirror.example.net/registry-example-com.
  4. Create the new ImageContentSourcePolicy object:

    $ oc create -f registryrepomirror.yaml

    After the ImageContentSourcePolicy object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository.
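
    The rollout is handled by the Machine Config Operator. One general way to watch its progress, for example, is to monitor the machine config pools until they report UPDATED as True:

    $ oc get machineconfigpool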

  5. To check that the mirrored configuration settings are applied, do the following on one of the nodes.

    1. List your nodes:

      $ oc get node
      Example output
      NAME                           STATUS                     ROLES    AGE  VERSION
      ip-10-0-137-44.ec2.internal    Ready                      worker   7m   v1.24.0
      ip-10-0-138-148.ec2.internal   Ready                      master   11m  v1.24.0
      ip-10-0-139-122.ec2.internal   Ready                      master   11m  v1.24.0
      ip-10-0-147-35.ec2.internal    Ready                      worker   7m   v1.24.0
      ip-10-0-153-12.ec2.internal    Ready                      worker   7m   v1.24.0
      ip-10-0-154-10.ec2.internal    Ready                      master   11m  v1.24.0

      The ImageContentSourcePolicy resource does not restart the nodes.

    2. Start the debugging process to access the node:

      $ oc debug node/ip-10-0-147-35.ec2.internal
      Example output
      Starting pod/ip-10-0-147-35ec2internal-debug ...
      To use host binaries, run `chroot /host`
    3. Change your root directory to /host:

      sh-4.2# chroot /host
    4. Check the /etc/containers/registries.conf file to make sure the changes were made:

      sh-4.2# cat /etc/containers/registries.conf
      Example output
      unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
      short-name-mode = ""
      
      [[registry]]
        prefix = ""
        location = "registry.access.redhat.com/ubi8/ubi-minimal"
        mirror-by-digest-only = true
      
        [[registry.mirror]]
          location = "example.io/example/ubi-minimal"
      
        [[registry.mirror]]
          location = "example.com/example/ubi-minimal"
      
      [[registry]]
        prefix = ""
        location = "registry.example.com"
        mirror-by-digest-only = true
      
        [[registry.mirror]]
          location = "mirror.example.net/registry-example-com"
      
      [[registry]]
        prefix = ""
        location = "registry.example.com/example"
        mirror-by-digest-only = true
      
        [[registry.mirror]]
          location = "mirror.example.net"
      
      [[registry]]
        prefix = ""
        location = "registry.example.com/example/myimage"
        mirror-by-digest-only = true
      
        [[registry.mirror]]
          location = "mirror.example.net/image"
      
      [[registry]]
        prefix = ""
        location = "registry.redhat.io"
        mirror-by-digest-only = true
      
        [[registry.mirror]]
          location = "mirror.example.com"
      
      [[registry]]
        prefix = ""
        location = "registry.redhat.io/openshift4"
        mirror-by-digest-only = true
      
        [[registry.mirror]]
          location = "mirror.example.com/redhat"
    5. Pull an image by digest to the node from the source and check whether it is resolved by the mirror. ImageContentSourcePolicy objects support only image digests, not image tags.

      sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6
Troubleshooting repository mirroring

If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem.

  • The first working mirror is used to supply the pulled image.

  • The main registry is only used if no other mirror works.

  • The insecure flags from the system context are used as a fallback.

  • The format of the /etc/containers/registries.conf file changed recently: it is now version 2 and uses TOML syntax. A minimal sketch follows this list.
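
For reference, a minimal version 2 entry might look like the following sketch, which assumes a hypothetical internal mirror and shows where the insecure flag appears:

[[registry]]
  prefix = ""
  location = "registry.example.com"
  insecure = false

  [[registry.mirror]]
    location = "mirror.internal.example.com"
    insecure = true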

Populating OperatorHub from mirrored Operator catalogs

If you mirrored Operator catalogs for use with disconnected clusters, you can populate OperatorHub with the Operators from your mirrored catalogs. You can use the generated manifests from the mirroring process to create the required ImageContentSourcePolicy and CatalogSource objects.

Creating the ImageContentSourcePolicy object

After mirroring Operator catalog content to your mirror registry, create the required ImageContentSourcePolicy (ICSP) object. The ICSP object configures nodes to translate between the image references stored in Operator manifests and the mirrored registry.

Procedure
  • On a host with access to the disconnected cluster, create the ICSP by running the following command to specify the imageContentSourcePolicy.yaml file in your manifests directory:

    $ oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml

    where <path/to/manifests/dir> is the path to the manifests directory for your mirrored content.

    You can now create a CatalogSource object to reference your mirrored index image and Operator content.

Adding a catalog source to a cluster

Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.

Prerequisites
  • An index image built and pushed to a registry.

Procedure
  1. Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point.

    1. Modify the following to your specifications and save it as a catalogSource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog (1)
        namespace: openshift-marketplace (2)
      spec:
        sourceType: grpc
        image: <registry>/<namespace>/redhat-operator-index:v4.9 (3)
        displayName: My Operator Catalog
        publisher: <publisher_name> (4)
        updateStrategy:
          registryPoll: (5)
            interval: 30m
      1 If you mirrored content to local files before uploading to a registry, remove any slash (/) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object.
      2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
      3 Specify your index image.
      4 Specify your name or an organization name publishing the catalog.
      5 Catalog sources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc apply -f catalogSource.yaml
  2. Verify that the following resources were created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace
      Example output
      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h
    2. Check the catalog source:

      $ oc get catalogsource -n openshift-marketplace
      Example output
      NAME                  DISPLAY               TYPE PUBLISHER  AGE
      my-operator-catalog   My Operator Catalog   grpc            5s
    3. Check the package manifest:

      $ oc get packagemanifest -n openshift-marketplace
      Example output
      NAME                          CATALOG               AGE
      jaeger-product                My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.

About Operator installation with OperatorHub

OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.

As a cluster administrator, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.

During installation, you must determine the following initial settings for the Operator:

Installation Mode

Choose All namespaces on the cluster (default) to have the Operator installed on all namespaces, or choose individual namespaces, if available, to install the Operator only on selected namespaces. This example chooses All namespaces… to make the Operator available to all users and projects.

Update Channel

If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.

Approval Strategy

You can choose automatic or manual updates.

If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.

If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
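
When you install with the CLI instead of the web console, the approval strategy maps to the installPlanApproval field of the Subscription object described later in this document. A minimal sketch with placeholder names:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators
spec:
  channel: <channel_name>
  name: <operator_name>
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual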

Installing from OperatorHub using the web console

You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console.

Prerequisites
  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure
  1. Navigate in the web console to the Operators → OperatorHub page.

  2. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type jaeger to find the Jaeger Operator.

    You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.

  3. Select the Operator to display additional information.

    Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.

  4. Read the information about the Operator and click Install.

  5. On the Install Operator page:

    1. Select one of the following:

      • All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.

      • A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.

    2. Select an Update Channel (if more than one is available).

    3. Select Automatic or Manual approval strategy, as described earlier.

  6. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.

    1. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.

      After approving on the Install Plan page, the subscription upgrade status moves to Up to date.

    2. If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.

  7. After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.

    For the All namespaces… installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

    If it does not:

    1. Check the logs of any pods that are reporting issues on the Workloads → Pods page in the openshift-operators project (or another relevant namespace, if the A specific namespace… installation mode was selected) to troubleshoot further.

Installing from OperatorHub using the CLI

Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.

Prerequisites
  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

  • Install the oc command to your local system.

Procedure
  1. View the list of Operators available to the cluster from OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace
    Example output
    NAME                               CATALOG               AGE
    3scale-operator                    Red Hat Operators     91m
    advanced-cluster-management        Red Hat Operators     91m
    amq7-cert-manager                  Red Hat Operators     91m
    ...
    couchbase-enterprise-certified     Certified Operators   91m
    crunchy-postgres-operator          Certified Operators   91m
    mongodb-enterprise                 Certified Operators   91m
    ...
    etcd                               Community Operators   91m
    jaeger                             Community Operators   91m
    kubefed                            Community Operators   91m
    ...

    Note the catalog for your desired Operator.

  2. Inspect your desired Operator to verify its supported install modes and available channels:

    $ oc describe packagemanifests <operator_name> -n openshift-marketplace
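
    For example, to print only the default channel and the channel names for an Operator, you might use a JSONPath query against the package manifest (a sketch; the status.defaultChannel and status.channels fields are assumed to be populated):

    $ oc get packagemanifests <operator_name> -n openshift-marketplace \
      -o jsonpath='{.status.defaultChannel}{"\n"}{.status.channels[*].name}{"\n"}'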
  3. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

    The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.

    However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.

    The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

      Example OperatorGroup object
      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace>
      spec:
        targetNamespaces:
        - <namespace>
    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
  4. Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

    Example Subscription object
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: <subscription_name>
      namespace: openshift-operators (1)
    spec:
      channel: <channel_name> (2)
      name: <operator_name> (3)
      source: redhat-operators (4)
      sourceNamespace: openshift-marketplace (5)
      config:
        env: (6)
        - name: ARGS
          value: "-v=10"
        envFrom: (7)
        - secretRef:
            name: license-secret
        volumes: (8)
        - name: <volume_name>
          configMap:
            name: <configmap_name>
        volumeMounts: (9)
        - mountPath: <directory_name>
          name: <volume_name>
        tolerations: (10)
        - operator: "Exists"
        resources: (11)
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        nodeSelector: (12)
          foo: bar
    1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
    2 Name of the channel to subscribe to.
    3 Name of the Operator to subscribe to.
    4 Name of the catalog source that provides the Operator.
    5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
    6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM.
    7 The envFrom parameter defines a list of sources to populate Environment Variables in the container.
    8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM.
    9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
    10 The tolerations parameter defines a list of Tolerations for the pod created by OLM.
    11 The resources parameter defines resource constraints for all the containers in the pod created by OLM.
    12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
  5. Create the Subscription object:

    $ oc apply -f sub.yaml

    At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
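
    To verify, you can, for example, check the subscription and the CSV in the target namespace (csv is commonly available as a short name for clusterserviceversion):

    $ oc get subscription <subscription_name> -n <namespace>
    $ oc get csv -n <namespace>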

Additional resources