Managing Role-based Access Control

Overview

You can use the CLI to view RBAC resources and the administrator CLI to manage the roles and bindings.

Viewing roles and bindings

Roles can be used to grant various levels of access, both cluster-wide and at the project scope. Users and groups can be associated with, or bound to, multiple roles at the same time. You can view details about roles and their bindings using the oc describe command.

Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource. Users with the admin default cluster role bound locally can manage roles and bindings in that project.
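
Before changing any bindings, you can check what a user is allowed to do. A minimal sketch, assuming your oc client exposes the kubectl-derived oc auth can-i subcommand; alice and joe-project are the example user and project used later in this topic:

# Check your own access, then (with impersonation rights) check another user's.
$ oc auth can-i create pods -n joe-project
$ oc auth can-i create pods -n joe-project --as=alice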

Review a full list of verbs in the Evaluating Authorization section.

Viewing cluster roles

To view the cluster roles and their associated rule sets:

$ oc describe clusterrole.rbac
Name:		admin
Labels:		<none>
Annotations:	openshift.io/description=A user that has edit rights within the project and can change the project's membership.
		rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources							Non-Resource URLs	Resource Names	Verbs
  ---------							-----------------	--------------	-----
  appliedclusterresourcequotas					[]			[]		[get list watch]
  appliedclusterresourcequotas.quota.openshift.io		[]			[]		[get list watch]
  bindings							[]			[]		[get list watch]
  buildconfigs							[]			[]		[create delete deletecollection get list patch update watch]
  buildconfigs.build.openshift.io				[]			[]		[create delete deletecollection get list patch update watch]
  buildconfigs/instantiate					[]			[]		[create]
  buildconfigs.build.openshift.io/instantiate			[]			[]		[create]
  buildconfigs/instantiatebinary				[]			[]		[create]
  buildconfigs.build.openshift.io/instantiatebinary		[]			[]		[create]
  buildconfigs/webhooks						[]			[]		[create delete deletecollection get list patch update watch]
  buildconfigs.build.openshift.io/webhooks			[]			[]		[create delete deletecollection get list patch update watch]
  buildlogs							[]			[]		[create delete deletecollection get list patch update watch]
  buildlogs.build.openshift.io					[]			[]		[create delete deletecollection get list patch update watch]
  builds							[]			[]		[create delete deletecollection get list patch update watch]
  builds.build.openshift.io					[]			[]		[create delete deletecollection get list patch update watch]
  builds/clone							[]			[]		[create]
  builds.build.openshift.io/clone				[]			[]		[create]
  builds/details						[]			[]		[update]
  builds.build.openshift.io/details				[]			[]		[update]
  builds/log							[]			[]		[get list watch]
  builds.build.openshift.io/log					[]			[]		[get list watch]
  configmaps							[]			[]		[create delete deletecollection get list patch update watch]
  cronjobs.batch						[]			[]		[create delete deletecollection get list patch update watch]
  daemonsets.extensions						[]			[]		[get list watch]
  deploymentconfigrollbacks					[]			[]		[create]
  deploymentconfigrollbacks.apps.openshift.io			[]			[]		[create]
  deploymentconfigs						[]			[]		[create delete deletecollection get list patch update watch]
  deploymentconfigs.apps.openshift.io				[]			[]		[create delete deletecollection get list patch update watch]
  deploymentconfigs/instantiate					[]			[]		[create]
  deploymentconfigs.apps.openshift.io/instantiate		[]			[]		[create]
  deploymentconfigs/log						[]			[]		[get list watch]
  deploymentconfigs.apps.openshift.io/log			[]			[]		[get list watch]
  deploymentconfigs/rollback					[]			[]		[create]
  deploymentconfigs.apps.openshift.io/rollback			[]			[]		[create]
  deploymentconfigs/scale					[]			[]		[create delete deletecollection get list patch update watch]
  deploymentconfigs.apps.openshift.io/scale			[]			[]		[create delete deletecollection get list patch update watch]
  deploymentconfigs/status					[]			[]		[get list watch]
  deploymentconfigs.apps.openshift.io/status			[]			[]		[get list watch]
  deployments.apps						[]			[]		[create delete deletecollection get list patch update watch]
  deployments.extensions					[]			[]		[create delete deletecollection get list patch update watch]
  deployments.extensions/rollback				[]			[]		[create delete deletecollection get list patch update watch]
  deployments.apps/scale					[]			[]		[create delete deletecollection get list patch update watch]
  deployments.extensions/scale					[]			[]		[create delete deletecollection get list patch update watch]
  deployments.apps/status					[]			[]		[create delete deletecollection get list patch update watch]
  endpoints							[]			[]		[create delete deletecollection get list patch update watch]
  events							[]			[]		[get list watch]
  horizontalpodautoscalers.autoscaling				[]			[]		[create delete deletecollection get list patch update watch]
  horizontalpodautoscalers.extensions				[]			[]		[create delete deletecollection get list patch update watch]
  imagestreamimages						[]			[]		[create delete deletecollection get list patch update watch]
  imagestreamimages.image.openshift.io				[]			[]		[create delete deletecollection get list patch update watch]
  imagestreamimports						[]			[]		[create]
  imagestreamimports.image.openshift.io				[]			[]		[create]
  imagestreammappings						[]			[]		[create delete deletecollection get list patch update watch]
  imagestreammappings.image.openshift.io			[]			[]		[create delete deletecollection get list patch update watch]
  imagestreams							[]			[]		[create delete deletecollection get list patch update watch]
  imagestreams.image.openshift.io				[]			[]		[create delete deletecollection get list patch update watch]
  imagestreams/layers						[]			[]		[get update]
  imagestreams.image.openshift.io/layers			[]			[]		[get update]
  imagestreams/secrets						[]			[]		[create delete deletecollection get list patch update watch]
  imagestreams.image.openshift.io/secrets			[]			[]		[create delete deletecollection get list patch update watch]
  imagestreams/status						[]			[]		[get list watch]
  imagestreams.image.openshift.io/status			[]			[]		[get list watch]
  imagestreamtags						[]			[]		[create delete deletecollection get list patch update watch]
  imagestreamtags.image.openshift.io				[]			[]		[create delete deletecollection get list patch update watch]
  jenkins.build.openshift.io					[]			[]		[admin edit view]
  jobs.batch							[]			[]		[create delete deletecollection get list patch update watch]
  limitranges							[]			[]		[get list watch]
  localresourceaccessreviews					[]			[]		[create]
  localresourceaccessreviews.authorization.openshift.io		[]			[]		[create]
  localsubjectaccessreviews					[]			[]		[create]
  localsubjectaccessreviews.authorization.k8s.io		[]			[]		[create]
  localsubjectaccessreviews.authorization.openshift.io		[]			[]		[create]
  namespaces							[]			[]		[get list watch]
  namespaces/status						[]			[]		[get list watch]
  networkpolicies.extensions					[]			[]		[create delete deletecollection get list patch update watch]
  persistentvolumeclaims					[]			[]		[create delete deletecollection get list patch update watch]
  pods								[]			[]		[create delete deletecollection get list patch update watch]
  pods/attach							[]			[]		[create delete deletecollection get list patch update watch]
  pods/exec							[]			[]		[create delete deletecollection get list patch update watch]
  pods/log							[]			[]		[get list watch]
  pods/portforward						[]			[]		[create delete deletecollection get list patch update watch]
  pods/proxy							[]			[]		[create delete deletecollection get list patch update watch]
  pods/status							[]			[]		[get list watch]
  podsecuritypolicyreviews					[]			[]		[create]
  podsecuritypolicyreviews.security.openshift.io		[]			[]		[create]
  podsecuritypolicyselfsubjectreviews				[]			[]		[create]
  podsecuritypolicyselfsubjectreviews.security.openshift.io	[]			[]		[create]
  podsecuritypolicysubjectreviews				[]			[]		[create]
  podsecuritypolicysubjectreviews.security.openshift.io		[]			[]		[create]
  processedtemplates						[]			[]		[create delete deletecollection get list patch update watch]
  processedtemplates.template.openshift.io			[]			[]		[create delete deletecollection get list patch update watch]
  projects							[]			[]		[delete get patch update]
  projects.project.openshift.io					[]			[]		[delete get patch update]
  replicasets.extensions					[]			[]		[create delete deletecollection get list patch update watch]
  replicasets.extensions/scale					[]			[]		[create delete deletecollection get list patch update watch]
  replicationcontrollers					[]			[]		[create delete deletecollection get list patch update watch]
  replicationcontrollers/scale					[]			[]		[create delete deletecollection get list patch update watch]
  replicationcontrollers.extensions/scale			[]			[]		[create delete deletecollection get list patch update watch]
  replicationcontrollers/status					[]			[]		[get list watch]
  resourceaccessreviews						[]			[]		[create]
  resourceaccessreviews.authorization.openshift.io		[]			[]		[create]
  resourcequotas						[]			[]		[get list watch]
  resourcequotas/status						[]			[]		[get list watch]
  resourcequotausages						[]			[]		[get list watch]
  rolebindingrestrictions					[]			[]		[get list watch]
  rolebindingrestrictions.authorization.openshift.io		[]			[]		[get list watch]
  rolebindings							[]			[]		[create delete deletecollection get list patch update watch]
  rolebindings.authorization.openshift.io			[]			[]		[create delete deletecollection get list patch update watch]
  rolebindings.rbac.authorization.k8s.io			[]			[]		[create delete deletecollection get list patch update watch]
  roles								[]			[]		[create delete deletecollection get list patch update watch]
  roles.authorization.openshift.io				[]			[]		[create delete deletecollection get list patch update watch]
  roles.rbac.authorization.k8s.io				[]			[]		[create delete deletecollection get list patch update watch]
  routes							[]			[]		[create delete deletecollection get list patch update watch]
  routes.route.openshift.io					[]			[]		[create delete deletecollection get list patch update watch]
  routes/custom-host						[]			[]		[create]
  routes.route.openshift.io/custom-host				[]			[]		[create]
  routes/status							[]			[]		[get list watch update]
  routes.route.openshift.io/status				[]			[]		[get list watch update]
  scheduledjobs.batch						[]			[]		[create delete deletecollection get list patch update watch]
  secrets							[]			[]		[create delete deletecollection get list patch update watch]
  serviceaccounts						[]			[]		[create delete deletecollection get list patch update watch impersonate]
  services							[]			[]		[create delete deletecollection get list patch update watch]
  services/proxy						[]			[]		[create delete deletecollection get list patch update watch]
  statefulsets.apps						[]			[]		[create delete deletecollection get list patch update watch]
  subjectaccessreviews						[]			[]		[create]
  subjectaccessreviews.authorization.openshift.io		[]			[]		[create]
  subjectrulesreviews						[]			[]		[create]
  subjectrulesreviews.authorization.openshift.io		[]			[]		[create]
  templateconfigs						[]			[]		[create delete deletecollection get list patch update watch]
  templateconfigs.template.openshift.io				[]			[]		[create delete deletecollection get list patch update watch]
  templateinstances						[]			[]		[create delete deletecollection get list patch update watch]
  templateinstances.template.openshift.io			[]			[]		[create delete deletecollection get list patch update watch]
  templates							[]			[]		[create delete deletecollection get list patch update watch]
  templates.template.openshift.io				[]			[]		[create delete deletecollection get list patch update watch]


Name:		basic-user
Labels:		<none>
Annotations:	openshift.io/description=A user that can get basic information about projects.
		rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources						Non-Resource URLs	Resource Names	Verbs
  ---------						-----------------	--------------	-----
  clusterroles						[]			[]		[get list]
  clusterroles.authorization.openshift.io		[]			[]		[get list]
  clusterroles.rbac.authorization.k8s.io		[]			[]		[get list watch]
  projectrequests					[]			[]		[list]
  projectrequests.project.openshift.io			[]			[]		[list]
  projects						[]			[]		[list watch]
  projects.project.openshift.io				[]			[]		[list watch]
  selfsubjectaccessreviews.authorization.k8s.io		[]			[]		[create]
  selfsubjectrulesreviews				[]			[]		[create]
  selfsubjectrulesreviews.authorization.openshift.io	[]			[]		[create]
  storageclasses.storage.k8s.io				[]			[]		[get list]
  users							[]			[~]		[get]
  users.user.openshift.io				[]			[~]		[get]


Name:		cluster-admin
Labels:		<none>
Annotations:	authorization.openshift.io/system-only=true
		openshift.io/description=A super-user that can perform any action in the cluster. When granted to a user within a project, they have full control over quota and membership and can perform every action...
		rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources	Non-Resource URLs	Resource Names	Verbs
  ---------	-----------------	--------------	-----
  		[*]			[]		[*]
  *.*		[]			[]		[*]


Name:		cluster-debugger
Labels:		<none>
Annotations:	authorization.openshift.io/system-only=true
		rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources	Non-Resource URLs	Resource Names	Verbs
  ---------	-----------------	--------------	-----
  		[/debug/pprof]		[]		[get]
  		[/debug/pprof/*]	[]		[get]
  		[/metrics]		[]		[get]


Name:		cluster-reader
Labels:		<none>
Annotations:	authorization.openshift.io/system-only=true
		rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources							Non-Resource URLs	Resource Names	Verbs
  ---------							-----------------	--------------	-----
  								[*]			[]		[get]
  apiservices.apiregistration.k8s.io				[]			[]		[get list watch]
  apiservices.apiregistration.k8s.io/status			[]			[]		[get list watch]
  appliedclusterresourcequotas					[]			[]		[get list watch]

...
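
The full listing is long. To inspect a single role, pass its name to oc describe; for example:

# Show only the admin cluster role and its rule set.
$ oc describe clusterrole.rbac admin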

Viewing cluster role bindings

To view the current set of cluster role bindings, which show the users and groups that are bound to various roles:

$ oc describe clusterrolebinding.rbac
Name:		admin
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	admin
Subjects:
  Kind			Name				Namespace
  ----			----				---------
  ServiceAccount	template-instance-controller	openshift-infra


Name:		basic-users
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	basic-user
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:authenticated


Name:		cluster-admin
Labels:		kubernetes.io/bootstrapping=rbac-defaults
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	cluster-admin
Subjects:
  Kind			Name		Namespace
  ----			----		---------
  ServiceAccount	pvinstaller	default
  Group			system:masters


Name:		cluster-admins
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	cluster-admin
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:cluster-admins
  User	system:admin


Name:		cluster-readers
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	cluster-reader
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:cluster-readers


Name:		cluster-status-binding
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	cluster-status
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:authenticated
  Group	system:unauthenticated


Name:		registry-registry-role
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	system:registry
Subjects:
  Kind			Name		Namespace
  ----			----		---------
  ServiceAccount	registry	default


Name:		router-router-role
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	system:router
Subjects:
  Kind			Name	Namespace
  ----			----	---------
  ServiceAccount	router	default


Name:		self-access-reviewers
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	self-access-reviewer
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:authenticated
  Group	system:unauthenticated


Name:		self-provisioners
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	self-provisioner
Subjects:
  Kind	Name				Namespace
  ----	----				---------
  Group	system:authenticated:oauth


Name:		system:basic-user
Labels:		kubernetes.io/bootstrapping=rbac-defaults
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	system:basic-user
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:authenticated
  Group	system:unauthenticated


Name:		system:build-strategy-docker-binding
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	system:build-strategy-docker
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:authenticated


Name:		system:build-strategy-jenkinspipeline-binding
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	system:build-strategy-jenkinspipeline
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:authenticated


Name:		system:build-strategy-source-binding
Labels:		<none>
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	system:build-strategy-source
Subjects:
  Kind	Name			Namespace
  ----	----			---------
  Group	system:authenticated


Name:		system:controller:attachdetach-controller
Labels:		kubernetes.io/bootstrapping=rbac-defaults
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	system:controller:attachdetach-controller
Subjects:
  Kind			Name			Namespace
  ----			----			---------
  ServiceAccount	attachdetach-controller	kube-system


Name:		system:controller:certificate-controller
Labels:		kubernetes.io/bootstrapping=rbac-defaults
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true
Role:
  Kind:	ClusterRole
  Name:	system:controller:certificate-controller
Subjects:
  Kind			Name			Namespace
  ----			----			---------
  ServiceAccount	certificate-controller	kube-system


Name:		system:controller:cronjob-controller
Labels:		kubernetes.io/bootstrapping=rbac-defaults
Annotations:	rbac.authorization.kubernetes.io/autoupdate=true

...
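
As with cluster roles, you can narrow the output to a single binding by name, or list all bindings in summary form:

# Describe one binding, or list them all briefly.
$ oc describe clusterrolebinding.rbac cluster-admin
$ oc get clusterrolebinding.rbac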

Viewing local roles and bindings

All of the default cluster roles can be bound locally to users or groups, and custom local roles can be created. The resulting local role bindings are also viewable.

To view the current set of local role bindings, which show the users and groups that are bound to various roles:

$ oc describe rolebinding.rbac

By default, the current project is used when viewing local role bindings. Alternatively, a project can be specified with the -n flag. This is useful for viewing the local role bindings of another project, if you already have the admin default cluster role in that project.

$ oc describe rolebinding.rbac -n joe-project
Name:		admin
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	admin
Subjects:
  Kind	Name	Namespace
  ----	----	---------
  User	joe


Name:		system:deployers
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	system:deployer
Subjects:
  Kind			Name		Namespace
  ----			----		---------
  ServiceAccount	deployer	joe-project


Name:		system:image-builders
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	system:image-builder
Subjects:
  Kind			Name	Namespace
  ----			----	---------
  ServiceAccount	builder	joe-project


Name:		system:image-pullers
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	system:image-puller
Subjects:
  Kind	Name					Namespace
  ----	----					---------
  Group	system:serviceaccounts:joe-project

Managing role bindings

Adding, or binding, a role to users or groups gives the user or group the relevant access granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands.

When managing a user or group’s associated roles for local role bindings using the following operations, a project may be specified with the -n flag. If it is not specified, then the current project is used.

Table 1. Local role binding operations

$ oc adm policy who-can <verb> <resource>
    Indicates which users can perform an action on a resource.

$ oc adm policy add-role-to-user <role> <username>
    Binds a given role to specified users in the current project.

$ oc adm policy remove-role-from-user <role> <username>
    Removes a given role from specified users in the current project.

$ oc adm policy remove-user <username>
    Removes specified users and all of their roles in the current project.

$ oc adm policy add-role-to-group <role> <groupname>
    Binds a given role to specified groups in the current project.

$ oc adm policy remove-role-from-group <role> <groupname>
    Removes a given role from specified groups in the current project.

$ oc adm policy remove-group <groupname>
    Removes specified groups and all of their roles in the current project.

--rolebinding-name=
    Can be used with oc adm policy commands to retain the rolebinding name assigned to a user or group.
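
For example, a common sequence is to audit who currently holds a permission, then grant a role under an explicit binding name; custom-editors below is an illustrative binding name, and alice and joe-project are the example user and project used elsewhere in this topic:

# Audit, then bind the edit cluster role locally under a named rolebinding.
$ oc adm policy who-can delete pods -n joe-project
$ oc adm policy add-role-to-user edit alice --rolebinding-name=custom-editors -n joe-project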

You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources.

Table 2. Cluster role binding operations

$ oc adm policy add-cluster-role-to-user <role> <username>
    Binds a given role to specified users for all projects in the cluster.

$ oc adm policy remove-cluster-role-from-user <role> <username>
    Removes a given role from specified users for all projects in the cluster.

$ oc adm policy add-cluster-role-to-group <role> <groupname>
    Binds a given role to specified groups for all projects in the cluster.

$ oc adm policy remove-cluster-role-from-group <role> <groupname>
    Removes a given role from specified groups for all projects in the cluster.

--rolebinding-name=
    Can be used with oc adm policy commands to retain the rolebinding name assigned to a user or group.
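
To illustrate, binding the cluster-reader role to a group gives all of its members read access across the cluster; ops-team is a hypothetical group name:

# ops-team is illustrative; the group must already exist.
$ oc adm policy add-cluster-role-to-group cluster-reader ops-team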

For example, you can add the admin role to the alice user in joe-project by running:

$ oc adm policy add-role-to-user admin alice -n joe-project

You can then view the local role bindings and verify the addition in the output:

$ oc describe rolebinding.rbac -n joe-project
Name:		admin
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	admin
Subjects:
  Kind	Name	Namespace
  ----	----	---------
  User	joe
  User	alice (1)


Name:		system:deployers
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	system:deployer
Subjects:
  Kind			Name		Namespace
  ----			----		---------
  ServiceAccount	deployer	joe-project


Name:		system:image-builders
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	system:image-builder
Subjects:
  Kind			Name	Namespace
  ----			----	---------
  ServiceAccount	builder	joe-project


Name:		system:image-pullers
Labels:		<none>
Annotations:	<none>
Role:
  Kind:	ClusterRole
  Name:	system:image-puller
Subjects:
  Kind	Name					Namespace
  ----	----					---------
  Group	system:serviceaccounts:joe-project
(1) The alice user has been added to the admin role binding.

Creating a local role

You can create a local role for a project and then bind it to a user.

  1. To create a local role for a project, run the following command:

    $ oc create role <name> --verb=<verb> --resource=<resource> -n <project>

    In this command, specify:

      • <name>, the local role’s name

      • <verb>, a comma-separated list of the verbs to apply to the role

      • <resource>, the resources that the role applies to

      • <project>, the project name

    For example, to create a local role that allows a user to view pods in the blue project, run the following command:

    $ oc create role podview --verb=get --resource=pod -n blue
  2. To bind the new role to a user, run the following command:

$ oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue
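
You can then confirm that the role and binding behave as expected; a quick check, reusing the podview role and blue project from the steps above:

# List the project's bindings and verify who can now get pods.
$ oc describe rolebinding.rbac -n blue
$ oc adm policy who-can get pods -n blue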

Creating a cluster role

To create a cluster role, run the following command:

$ oc create clusterrole <name> --verb=<verb> --resource=<resource>

In this command, specify:

  • <name>, the cluster role’s name

  • <verb>, a comma-separated list of the verbs to apply to the role

  • <resource>, the resources that the role applies to

For example, to create a cluster role that allows a user to view pods, run the following command:

$ oc create clusterrole podviewonly --verb=get --resource=pod
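
Because podviewonly is a cluster role, it can then be bound in any project with a local role binding; for example, granting it to a hypothetical user user3 in the blue project:

# user3 is illustrative; --role-namespace is omitted because this is a cluster role.
$ oc adm policy add-role-to-user podviewonly user3 -n blue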

Cluster and local role bindings

A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation.
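
For example, to let a hypothetical user bob view a single project without granting any cluster-wide access, bind the view cluster role with a local role binding:

# bob can view joe-project only; no cluster-wide read access is granted.
$ oc adm policy add-role-to-user view bob -n joe-project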

Some cluster role names are initially confusing. For example, you can bind cluster-admin to a user with a local role binding, which makes it appear that the user has the privileges of a cluster administrator. This is not the case: binding cluster-admin within a project makes the user a super administrator for that project only, granting the permissions of the admin cluster role plus a few additional permissions, such as the ability to edit rate limits. This can be especially confusing in the web console UI, which does not list the cluster role bindings that belong to true cluster administrators, but does list the local role bindings that you can use to bind cluster-admin locally.
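
For example, the following local binding makes a hypothetical user carol a super administrator of joe-project only; despite the role’s name, it grants no cluster-wide privileges:

# Local binding: cluster-admin here is scoped to joe-project.
$ oc adm policy add-role-to-user cluster-admin carol -n joe-project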