The openshift start command and its subcommands (master to launch a master server and node to launch a node server) take a limited set of arguments that are sufficient for launching servers in a development or experimental environment. However, these arguments are insufficient to describe and control the full set of configuration and security options that are necessary in a production environment. To provide those options, it is necessary to use the master and node configuration files:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
These files define options including overriding the default plug-ins, connecting to etcd, automatically creating service accounts, building image names, customizing project requests, configuring volume plug-ins, and much more.
This topic covers the available options for customizing your OKD master and node hosts, and shows you how to make changes to the configuration after installation.
These files are fully specified with no default values. Therefore, an empty value indicates that you want to start up with an empty value for that parameter. This makes it easy to reason about exactly what your configuration is, but it also makes it difficult to remember all of the options to specify. To make this easier, the configuration files can be created with the --write-config option and then used with the --config option.
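For example, the workflow looks like this (the output directory is illustrative; the same commands appear in more detail later in this topic):

$ openshift start master --write-config=/openshift.local.config/master
$ openshift start master --config=/openshift.local.config/master/master-config.yaml

The first command writes master-config.yaml and its supporting certificate files to the directory, and the second launches the master using the edited file.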
For testing environments deployed via the quick install tool, one master should be sufficient. The quick installation method should not be used for production environments.
Production environments should be installed using the advanced install. In production environments, it is a good idea to use multiple masters for the purposes of high availability (HA). A cluster architecture of three masters is recommended, and HAProxy is the recommended solution for this.
If etcd is installed on the master hosts, you must configure your cluster to use at least three masters, because with only two masters etcd cannot establish a quorum to decide which one is authoritative. The only way to successfully run only two masters is if you install etcd on hosts other than the masters.
The method you use to configure your master and node configuration files must match the method that was used to install your OKD cluster. If you followed the:
Advanced installation method using Ansible, then make your configuration changes in the Ansible playbook.
Quick installation or Manual installation method, then make your changes manually in the configuration files themselves.
For this section, familiarity with Ansible is assumed.
Only a portion of the available host configuration options are exposed to Ansible. After an OKD install, Ansible creates an inventory file with some substituted values. Modifying this inventory file and re-running the Ansible installer playbook is how you customize your OKD cluster.
While OKD supports using Ansible for the advanced install method, using an Ansible playbook and inventory file, you can also use other management tools, such as Puppet, Chef, or Salt.
Use Case: Configuring the cluster to use HTPasswd authentication
To modify the Ansible inventory and make configuration changes:
Open the ./hosts inventory file:
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=cloud-user
ansible_become=true
openshift_deployment_type=openshift-enterprise

[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_ip=172.17.3.88 openshift_public_ip=52-6-179-239 openshift_hostname=master.example.com openshift_public_hostname=ose3-master.public.example.com containerized=True

[nodes]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_ip=172.17.3.88 openshift_public_ip=52-6-179-239 openshift_hostname=master.example.com openshift_public_hostname=ose3-master.public.example.com containerized=True openshift_schedulable=False
ec2-52-95-5-36.compute-1.amazonaws.com openshift_ip=172.17.3.89 openshift_public_ip=52.3.5.36 openshift_hostname=node.example.com openshift_public_hostname=ose3-node.public.example.com containerized=True
Add the following new variables to the [OSEv3:vars]
section of the file:
# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Defining htpasswd users
openshift_master_htpasswd_users={'<name>': '<hashed-password>', '<name>': '<hashed-password>'}
# or
#openshift_master_htpasswd_file=<path/to/local/pre-generated/htpasswdfile>
For HTPasswd authentication, you can use either the openshift_master_htpasswd_users variable to create the specified user(s) and password(s) or the openshift_master_htpasswd_file variable to specify a pre-generated flat file (the htpasswd file) with the users and passwords already created.
Because OKD requires a hashed password to configure HTPasswd authentication, you can use the htpasswd command, as shown in the following section, to generate the hashed password(s) for your user(s) or to create the flat file with the users and associated hashed passwords.
The following example changes the authentication method from the default deny all setting to htpasswd and uses the specified file to generate user IDs and passwords for the jsmith and bloblaw users.
# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Defining htpasswd users
openshift_master_htpasswd_users={'jsmith': '$apr1$wIwXkFLI$bAygtKGmPOqaJftB', 'bloblaw': '7IRJ$2ODmeLoxf4I6sUEKfiA$2aDJqLJe'}
# or
#openshift_master_htpasswd_file=<path/to/local/pre-generated/htpasswdfile>
Re-run the Ansible playbook for these modifications to take effect:
$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/deploy_cluster.yml
The playbook updates the configuration, and restarts the OKD master service to apply the changes.
You have now modified the master and node configuration files using Ansible, but this is just a simple use case. From here you can see which master and node configuration options are exposed to Ansible and customize your own Ansible inventory.
Creating user names and passwords with the htpasswd command
To configure the OKD cluster to use HTPasswd authentication, you need at least one user with a hashed password to include in the inventory file. If the htpasswd utility is not installed, install it first:
# yum install httpd-tools
You can:
Generate the username and password to add directly to the ./hosts inventory file.
Create a flat file to pass the credentials to the ./hosts inventory file.
To create a user and hashed password:
Run the following command to add the specified user:
$ htpasswd -n <user_name>
You can provide the clear-text password on the command line instead by using the -b option: $ htpasswd -nb <user_name> <password>
Enter and confirm a clear-text password for the user.
For example:
$ htpasswd -n myuser
New password:
Re-type new password:
myuser:$apr1$vdW.cI3j$WSKIOzUPs6Q
The command generates a hashed version of the password.
You can then use the hashed password when configuring HTPasswd authentication. The hashed password is the string after the colon (:). In the above example, you would enter:
openshift_master_htpasswd_users={'myuser': '$apr1$wIwXkFLI$bAygtISk2eKGmqaJftB'}
To create a flat file with a user name and hashed password:
Execute the following command:
$ htpasswd -c </path/to/users.htpasswd> <user_name>
You can provide the clear-text password on the command line instead by using the -b option: $ htpasswd -c -b </path/to/users.htpasswd> <user_name> <password>
Enter and confirm a clear-text password for the user.
For example:
$ htpasswd -c users.htpasswd user1
New password:
Re-type new password:
Adding password for user user1
The command generates a file that includes the user name and a hashed version of the user’s password.
You can then use the password file when configuring HTPasswd authentication.
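To add another user to an existing flat file, run htpasswd again without the -c option; the user name below is illustrative:

$ htpasswd users.htpasswd user2
New password:
Re-type new password:
Adding password for user user2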
For more information on the htpasswd command, see the HTPasswd identity provider documentation.
After installing OKD using the quick install tool, you can make modifications to the master and node configuration files to customize your cluster.
Use Case: Configuring the cluster to use HTPasswd authentication
To manually modify a configuration file:
Open the configuration file you want to modify, which in this case is the /etc/origin/master/master-config.yaml file:
Add the following new variables to the identityProviders
stanza of the file:
oauthConfig:
  ...
  identityProviders:
  - name: my_htpasswd_provider
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /path/to/users.htpasswd
Save your changes and close the file.
Restart the master for the changes to take effect:
$ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
You have now manually modified the master and node configuration files, but this is just a simple use case. From here you can see all the master and node configuration options and further customize your own cluster by making additional modifications.
This section reviews parameters mentioned in the master-config.yaml file.
You can create a new master configuration file to see the valid options for your installed version of OKD.
Whenever you modify the master-config.yaml file, you must restart the master for the changes to take effect. See Restarting OKD services.
Parameter Name | Description |
---|---|
|
Contains the admission control plug-in configuration. OKD has a configurable list of admission controller plug-ins that are triggered whenever API objects are created or modified. This option allows you to override the default list of plug-ins; for example, disabling some plug-ins, adding others, changing the ordering, and specifying configuration. Both the list of plug-ins and their configuration can be controlled from Ansible. |
|
Key-value pairs that will be passed directly to the Kube API server that match the API server's command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations. For example:
apiServerArguments:
  event-ttl:
  - "15m" |
|
Key-value pairs that will be passed directly to the Kube controller manager that match the controller manager's command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations. |
|
Used to enable or disable various admission plug-ins. When this type is present as the configuration object under pluginConfig and all other types are absent, it causes an "off by default" admission plug-in to be enabled. |
|
Allows specifying a configuration file per admission control plug-in. |
|
A list of admission control plug-in names that will be installed on the master. Order is significant. If empty, a default list of plug-ins is used. |
|
Key-value pairs that will be passed directly to the Kube scheduler that match the scheduler's command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations. |
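Per-plug-in configuration is nested under pluginConfig. The following is a minimal sketch of that shape; the plug-in name, kind, and fields are placeholders, not recommendations:

admissionConfig:
  pluginConfig:
    <admission_plugin_name>:
      configuration:
        apiVersion: v1
        kind: <AdmissionPluginConfigKind>
        <plugin_specific_field>: <value>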
Parameter Name | Description |
---|---|
|
If present, then the asset server starts based on the defined parameters. For example:
assetConfig:
  logoutURL: ""
  masterPublicURL: https://master.ose32.example.com:8443
  publicURL: https://master.ose32.example.com:8443/console/
  servingInfo:
    bindAddress: 0.0.0.0:8443
    bindNetwork: tcp4
    certFile: master.server.crt
    clientCA: ""
    keyFile: master.server.key
    maxRequestsInFlight: 0
    requestTimeoutSeconds: 0 |
|
To access the API server from a web application using a different host name, you must whitelist that host name by specifying corsAllowedOrigins in the configuration field or by specifying the --cors-allowed-origins option on openshift start. |
|
A list of features that should not be started. You will likely want to set this as null. It is very unlikely that anyone will want to manually disable features and that is not encouraged. |
|
Files to serve from the asset server file system under a subcontext. |
|
When set to true, tells the asset server to reload extension scripts and stylesheets for every request rather than only at startup. It lets you develop extensions without having to restart the server for every change. |
|
Key- (string) and value- (string) pairs that will be injected into the console under
the global variable |
|
File paths on the asset server files to load as scripts when the web console loads. |
|
File paths on the asset server files to load as style sheets when the web console loads. |
|
The public endpoint for logging (optional). |
|
An optional, absolute URL to redirect web browsers to after logging out of the web console. If not specified, the built-in logout page is shown. |
|
How the web console can access the OKD server. |
|
The public endpoint for metrics (optional). |
|
URL of the asset server. |
Parameter Name | Description |
---|---|
|
Holds authentication and authorization configuration options. |
|
Indicates how many authentication results should be cached. If 0, the default cache size is used. |
|
Indicates how long an authorization result should be cached. It takes a valid time duration string (e.g. "5m"). If empty, you get the default timeout. If zero (e.g. "0m"), caching is disabled. |
Parameter Name | Description |
---|---|
|
List of the controllers that should be started. If set to none, no controllers will start automatically. The default value is * which will start all controllers. When using *, you may exclude controllers by prepending a - in front of their name. |
|
Enables controller election, instructing the master to attempt to acquire a lease before controllers start and renewing it within a number of seconds defined by this value. Setting this value non-negative forces pauseControllers=true. This value defaults to off (0, or omitted), and controller election can be disabled with -1. |
|
Instructs the master to not automatically start controllers, but instead to wait until a notification to the server is received before launching them. |
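A minimal sketch of how these options might appear at the top level of master-config.yaml, assuming the default values described above:

controllerLeaseTTL: 0
controllers: '*'
pauseControllers: false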
Parameter Name | Description |
---|---|
|
The advertised host:port for client connections to etcd. |
|
Contains information about how to connect to etcd. Specifies if etcd is run as embedded or non-embedded, and the hosts. The rest of the configuration is handled by the Ansible inventory. For example:
etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://m1.aos.example.com:4001 |
|
If present, then etcd starts based on the defined parameters. For example:
etcdConfig:
  address: master.ose32.example.com:4001
  peerAddress: master.ose32.example.com:7001
  peerServingInfo:
    bindAddress: 0.0.0.0:7001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  servingInfo:
    bindAddress: 0.0.0.0:4001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  storageDirectory: /var/lib/origin/openshift.local.etcd |
|
Contains information about how API resources are stored in etcd. These values are only relevant when etcd is the backing store for the cluster. |
|
The path within etcd that the Kubernetes resources will be rooted under. This value, if changed, will mean existing objects in etcd will no longer be located. The default value is kubernetes.io. |
|
The API version that Kubernetes resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version. |
|
The path within etcd that the OKD resources will be rooted under. This value, if changed, will mean existing objects in etcd will no longer be located. The default value is openshift.io. |
|
API version that OS resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version. |
|
The advertised host:port for peer connections to etcd. |
|
Describes how to start serving the etcd peer. |
|
Describes how to start serving. For example:
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  requestTimeoutSeconds: 3600 |
|
The path to the etcd storage directory. |
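For reference, the storage-related options above usually appear together in master-config.yaml; the following sketch uses the default prefixes described in this table and v1 as a representative serialization version:

etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1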
Parameter Name | Description |
---|---|
|
Describes how to handle grants. |
|
Auto-approves client authorization grant requests. |
|
Auto-denies client authorization grant requests. |
|
Prompts the user to approve new client authorization grant requests. |
|
Determines the default strategy to use when an OAuth client requests a grant. This method will be used only if the specific OAuth client does not provide a strategy of its own. Valid grant handling methods are: auto (always approve grant requests), prompt (prompt the end user for approval of grant requests), and deny (always deny grant requests). |
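For example, the grant strategy is set under oauthConfig, as in the fuller oauthConfig example later in this topic:

oauthConfig:
  grantConfig:
    method: auto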
Parameter Name | Description |
---|---|
|
The format of the name to be built for the system component. |
|
Determines if the latest tag will be pulled from the registry. |
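A sketch of the corresponding stanza in master-config.yaml; the format string shown is the Origin default that also appears in the node configuration example later in this topic:

imageConfig:
  format: openshift/origin-${component}:${version}
  latest: false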
Parameter Name | Description |
---|---|
|
Allows scheduled background import of images to be disabled. |
|
Controls the number of images that are imported when a user does a bulk import of a Docker repository. This number defaults to 5 to prevent users from importing large numbers of images accidentally. Set -1 for no limit. |
|
The maximum number of scheduled image streams that will be imported in the background per minute. The default value is 60. |
|
The minimum number of seconds that can elapse between when image streams scheduled for background import are checked against the upstream repository. The default value is 15 minutes. |
|
Limits the docker registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions. |
|
Sets the hostname for the default internal image registry. The value must be in hostname[:port] format. |
|
ExternalRegistryHostname sets the hostname for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The value is used in the publicDockerImageRepository field in image streams and must be in hostname[:port] format. |
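Putting the defaults described above together, an imagePolicyConfig stanza might look like the following sketch; the internal registry hostname is an assumption based on the default registry service, not a required value:

imagePolicyConfig:
  disableScheduledImport: false
  maxImagesBulkImportedPerRepository: 5
  maxScheduledImageImportsPerMinute: 60
  scheduledImageImportMinimumIntervalSeconds: 900
  internalRegistryHostname: docker-registry.default.svc:5000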
Parameter Name | Description |
---|---|
|
A list of API levels that should be enabled on startup; for example, v1. |
|
A map of groups to the versions (or *) that should be disabled. |
|
Contains information about how to connect to kubelets. |
|
Contains information about how to connect to kubelet's KubernetesMasterConfig. If present, then start the Kubernetes master with this process. |
|
The number of expected masters that should be running. This value defaults to 1 and may be set to a positive integer, or if set to -1, indicates this is part of a cluster. |
|
The public IP address of Kubernetes resources. If empty, the first result from net.InterfaceAddrs will be used. |
|
File name for the .kubeconfig file that describes how to connect this node to the master. |
|
The range to use for assigning service public ports on a host. Default 30000-32767. |
|
The subnet to use for assigning service IPs. |
|
The list of nodes that are statically known. |
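A minimal sketch of a kubernetesMasterConfig stanza built from the defaults described above; the IP address and services subnet are illustrative values, not recommendations:

kubernetesMasterConfig:
  apiLevels:
  - v1
  masterCount: 1
  masterIP: 10.0.2.15
  servicesNodePortRange: 30000-32767
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []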
Choose the CIDRs in the following parameters carefully, because the IPv4 address space is shared by all users of the nodes. OKD reserves CIDRs from the IPv4 address space for its own use, and reserves CIDRs from the IPv4 address space for addresses that are shared between the external user and the cluster.
Parameter Name | Description |
---|---|
|
The CIDR string to specify the global overlay network’s L3 space. This is reserved for the internal use of the cluster networking. |
|
Controls what values are acceptable for the service external IP field. If empty, no external IP may be set. |
|
The number of bits to allocate to each host’s subnet. For example, 8 would mean a /24 network on the host. |
|
Controls the range to assign ingress IPs from for services of type LoadBalancer on bare metal. It may contain a single CIDR that it will be allocated from. By default it is set to 172.29.0.0/16. |
|
The number of bits to allocate to each host’s subnet. For example, 8 would mean a /24 network on the host. |
|
To be passed to the compiled-in network plug-in. Many of the options here can be controlled in the Ansible inventory. For example:
networkConfig:
  clusterNetworks:
  - cidr: 10.3.0.0/16
    hostSubnetLength: 8
  networkPluginName: example/openshift-ovs-subnet
  # serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
  serviceNetworkCIDR: 179.29.0.0/16 |
|
The name of the network plug-in to use. |
|
The CIDR string to specify the service networks. |
Parameter Name | Description |
---|---|
|
Forces the provider selection page to render even when there is only a single provider. |
|
Used for building valid client redirect URLs for external access. |
|
A path to a file containing a go template used to render error pages during the authentication or grant flow. If unspecified, the default error page is used. |
|
Ordered list of ways for a user to identify themselves. |
|
A path to a file containing a go template used to render the login page. If unspecified, the default login page is used. |
|
CA for verifying the TLS connection back to the masterURL. |
|
Used for building valid client redirect URLs for external access. |
|
Used for making server-to-server calls to exchange authorization codes for access tokens. |
|
If present, then the /oauth endpoint starts based on the defined parameters. For example:
oauthConfig:
  assetPublicURL: https://master.ose32.example.com:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: htpasswd_all
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/openshift-passwd
  masterCA: ca.crt
  masterPublicURL: https://master.ose32.example.com:8443
  masterURL: https://master.ose32.example.com:8443
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500 |
|
Allows for customization of pages like the login page. |
|
A path to a file containing a go template used to render the provider selection page. If unspecified, the default provider selection page is used. |
|
Holds information about configuring sessions. |
|
Allows you to customize pages like the login page. |
|
Contains options for authorization and access tokens. |
Parameter Name | Description |
---|---|
|
Holds default project node label selector. |
|
Holds information about project creation and defaults:
|
|
The string presented to a user if they are unable to request a project via the project request API endpoint. |
|
The template to use for creating projects in response to a project request. It is in the format namespace/template and it is optional. If it is not specified, a default template is used. |
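A sketch of a projectConfig stanza using the options above; the message text and template name are placeholders:

projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: "To request a project, contact your system administrator."
  projectRequestTemplate: "default/project-request"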
Parameter Name | Description |
---|---|
|
Points to a file that describes how to set up the scheduler. If empty, you get the default scheduling rules. |
Parameter Name | Description |
---|---|
|
Defines the range of MCS categories that will be assigned to namespaces. The
format is |
|
Controls the automatic allocation of UIDs and MCS labels to a project. If nil, allocation is disabled. |
|
Defines the total set of Unix user IDs (UIDs) that will be allocated to projects automatically, and the size of the block that each namespace gets. For example, 1000-1999/10 will allocate ten UIDs per namespace, and will be able to allocate up to 100 blocks before running out of space. The default is to allocate from 1 billion to 2 billion in 10k blocks (which is the expected size of the ranges container images will use once user namespaces are started). |
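These allocator options are nested under projectConfig in master-config.yaml; a sketch using the defaults described in this table (the MCS range shown is the commonly generated default and should be treated as an assumption):

projectConfig:
  securityAllocator:
    mcsAllocatorRange: "s0:/2"
    uidAllocatorRange: "1000000000-1999999999/10000"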
Parameter Name | Description |
---|---|
|
Controls whether or not to allow a service account to reference any secret in a namespace without explicitly referencing them. |
|
A list of service account names that will be auto-created in every namespace. If no names are specified, the service accounts controller will not be started. |
|
The CA for verifying the TLS connection back to the master. The service account controller will automatically inject the contents of this file into pods so they can verify connections to the master. |
|
A file containing a PEM-encoded private RSA key, used to sign service account tokens. If no private key is specified, the service account tokens controller will not be started. |
|
A list of files, each containing a PEM-encoded public RSA key. If any file contains a private key, the public portion of the key is used. The list of public keys is used to verify presented service account tokens. Each key is tried in order until the list is exhausted or verification succeeds. If no keys are specified, no service account authentication will be available. |
|
Holds options related to service accounts:
|
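A sketch of a serviceAccountConfig stanza assembled from the options above; the file names mirror the certificates an installer typically generates, but treat them as placeholders:

serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key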
Parameter Name | Description |
---|---|
|
Allows the DNS server on the master to answer queries recursively. Note that open resolvers can be used for DNS amplification attacks and the master DNS should not be made accessible to public networks. |
|
The ip:port to serve on. |
|
Controls limits and behavior for importing images. |
|
A file containing a PEM-encoded certificate. |
|
TLS cert information for serving secure traffic. |
|
The certificate bundle for all the signers that you recognize for incoming client certificates. |
|
If present, then start the DNS server based on the defined parameters. For example:
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4 |
|
Holds the domain suffix. |
|
Holds the IP. |
|
A file containing a PEM-encoded private key for the certificate specified by certFile. |
|
Provides overrides to the client connection used to connect to the master. This parameter is not supported. To set QPS and burst values, see Setting Node Queries per Second (QPS) Limits and Burst Values. |
|
The number of concurrent requests allowed to the server. If zero, no limit. |
|
A list of certificates to use to secure requests to specific host names. |
|
The number of seconds before requests are timed out. The default is 60 minutes. If -1, there is no limit on requests. |
|
The HTTP serving information for the assets. |
Parameter Name | Description |
---|---|
|
A boolean to enable or disable dynamic provisioning. Default is true. |
FSGroup |
Can be specified to enable a quota on local storage use per unique FSGroup ID. At present this is only implemented for emptyDir volumes, and only if the underlying volumeDirectory is on an XFS filesystem. |
|
Contains options for controlling local volume quota on the node. |
|
Contains options for configuring volume plug-ins in the master node. |
|
Contains options for configuring volumes on the node. |
|
Contains options for configuring volume plug-ins in the node:
|
|
The directory that volumes are stored under. |
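For the master, these options commonly reduce to a small stanza; a sketch with dynamic provisioning left at its default:

volumeConfig:
  dynamicProvisioningEnabled: true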
Audit provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system.
Audit works at the API server level, logging all requests coming to the server. Each audit log contains two entries:
The request line containing:
A unique ID allowing the response line to be matched (see #2)
The source IP of the request
The HTTP method being invoked
The original user invoking the operation
The impersonated user for the operation (<self> meaning the user himself)
The impersonated group for the operation (<lookup> meaning the user's group)
The namespace of the request or <none>
The URI as requested
The response line containing:
The unique ID from #1
The response code
Example output for user admin asking for a list of pods:
AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" ip="127.0.0.1" method="GET" user="admin" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" response="200"
The openshift_master_audit_config
variable enables API service auditing. It
takes an array of the following options:
Parameter Name | Description |
---|---|
|
A boolean to enable or disable audit logs. Default is false. |
|
File path where the requests should be logged to. If not set, logs are printed to master logs. |
|
Specifies the maximum number of days to retain old audit log files based on the time stamp encoded in their filename. |
|
Specifies the maximum number of old audit log files to retain. |
|
Specifies the maximum size in megabytes of the log file before it gets rotated. Defaults to 100MB. |
auditConfig:
  auditFilePath: "/var/lib/audit-ocp.log"
  enabled: true
  maximumFileRetentionDays: 10
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 10
To enable audit logging using Ansible, set:
openshift_master_audit_config={"enabled": true}
If you want a more advanced setup for the audit log, you can also specify the file path and rotation options. The directory in auditFilePath will be created if it does not exist. For example:
openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/openpaas-oscp-audit/openpaas-oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5}
The advanced audit feature provides several improvements over the basic audit functionality, including fine-grained events filtering and multiple output back ends.
To enable the advanced audit feature, provide the following values in the openshift_master_audit_config parameter:
openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/oscp-audit/-oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5, "policyFile": "/etc/security/adv-audit.yaml", "logFormat":"json"}
The policy file /etc/security/adv-audit.yaml must be available on each master node.
The following table contains additional options you can use.
Parameter Name | Description |
---|---|
|
Path to the file that defines the audit policy configuration. |
|
An embedded audit policy configuration. |
|
Specifies the format of the saved audit logs. Allowed values are legacy and json. |
|
Path to a .kubeconfig-formatted file that defines the audit webhook configuration. |
|
Specifies the strategy for sending audit events. Allowed values are batch and blocking. |
To enable the advanced audit feature, you must provide either a policy file (policyFile) or an embedded policy configuration that describes the audit policy rules. The following is a sample audit policy configuration:
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# Do not log watch requests by the "system:kube-proxy" on endpoints or services
- level: None (1)
users: ["system:kube-proxy"] (2)
verbs: ["watch"] (3)
resources: (4)
- group: ""
resources: ["endpoints", "services"]
# Do not log authenticated requests to certain non-resource URL paths.
- level: None
userGroups: ["system:authenticated"] (5)
nonResourceURLs: (6)
- "/api*" # Wildcard matching.
- "/version"
# Log the request body of configmap changes in kube-system.
- level: Request
resources:
- group: "" # core API group
resources: ["configmaps"]
# This rule only applies to resources in the "kube-system" namespace.
# The empty string "" can be used to select non-namespaced resources.
namespaces: ["kube-system"] (7)
# Log configmap and secret changes in all other namespaces at the metadata level.
- level: Metadata
resources:
- group: "" # core API group
resources: ["secrets", "configmaps"]
# Log all other resources in core and extensions at the request level.
- level: Request
resources:
- group: "" # core API group
- group: "extensions" # Version of group should NOT be included.
# A catch-all rule to log all other requests at the Metadata level.
- level: Metadata (1)
# Log login failures from the web console or CLI. Review the logs and refine your policies.
- level: Metadata
nonResourceURLs:
- /login* (8)
- /oauth* (9)
1 | There are four possible levels every event can be logged at: None, Metadata, Request, and RequestResponse. |
2 | A list of users the rule applies to. An empty list implies every user. |
3 | A list of verbs this rule applies to. An empty list implies every verb. This is the Kubernetes verb associated with API requests (including get, list, watch, create, update, patch, delete, deletecollection, and proxy). |
4 | A list of resources the rule applies to. An empty list implies every resource. Each resource is specified as the group it is assigned to (for example, an empty string for the Kubernetes core API, batch, build.openshift.io, etc.), and a resource list from that group. |
5 | A list of groups the rule applies to. An empty list implies every group. |
6 | A list of non-resources URLs the rule applies to. |
7 | A list of namespaces the rule applies to. An empty list implies every namespace. |
8 | Endpoint used by the web console. |
9 | Endpoint used by the CLI. |
For more information on advanced audit, see the Kubernetes documentation.
You can specify the supported TLS ciphers to use in communication between the master and etcd servers.
On each etcd node, upgrade etcd:
# yum update etcd iptables-services
Confirm that your etcd version is 3.2.22 or later:
# etcd --version
etcd Version: 3.2.22
On each master host, specify the ciphers to enable in the
/etc/origin/master/master-config.yaml
file:
servingInfo:
  ...
  minTLSVersion: VersionTLS12
  cipherSuites:
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_RSA_WITH_AES_256_CBC_SHA
  - TLS_RSA_WITH_AES_128_CBC_SHA
  ...
On each master host, restart the master service:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
Confirm that the cipher is applied. For example, for TLSv1.2 cipher
ECDHE-RSA-AES128-GCM-SHA256
, run the following command:
# openssl s_client -connect etcd1.example.com:2379 (1) CONNECTED(00000003) depth=0 CN = etcd1.example.com verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 CN = etcd1.example.com verify error:num=21:unable to verify the first certificate verify return:1 139905367488400:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:s3_pkt.c:1493:SSL alert number 42 139905367488400:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177: --- Certificate chain 0 s:/CN=etcd1.example.com i:/CN=etcd-signer@1529635004 --- Server certificate -----BEGIN CERTIFICATE----- MIIEkjCCAnqgAwIBAgIBATANBgkqhkiG9w0BAQsFADAhMR8wHQYDVQQDDBZldGNk ........ .... eif87qttt0Sl1vS8DG1KQO1oOBlNkg== -----END CERTIFICATE----- subject=/CN=etcd1.example.com issuer=/CN=etcd-signer@1529635004 --- Acceptable client certificate CA names /CN=etcd-signer@1529635004 Client Certificate Types: RSA sign, ECDSA sign Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA1:ECDSA+SHA1 Shared Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA1:ECDSA+SHA1 Peer signing digest: SHA384 Server Temp Key: ECDH, P-256, 256 bits --- SSL handshake has read 1666 bytes and written 138 bytes --- New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES128-GCM-SHA256 Session-ID: Session-ID-ctx: master-Key: 1EFA00A91EE5FC5EDDCFC67C8ECD060D44FD3EB23D834EDED929E4B74536F273C0F9299935E5504B562CD56E76ED208D Key-Arg : None Krb5 Principal: None PSK identity: None PSK identity hint: None Start Time: 1529651744 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate)
1 | etcd1.example.com is the name of an etcd host. |
The following node-config.yaml file is a sample node configuration file that was generated with the default values as of this writing. You can create a new node configuration file to see the valid options for your installed version of OKD.
allowDisabledDocker: false
apiVersion: v1
authConfig:
authenticationCacheSize: 1000
authenticationCacheTTL: 5m
authorizationCacheSize: 1000
authorizationCacheTTL: 5m
dnsDomain: cluster.local
dnsIP: 10.0.2.15 (1)
dockerConfig:
execHandlerName: native
imageConfig:
format: openshift/origin-${component}:${version}
latest: false
iptablesSyncPeriod: 5s
kind: NodeConfig
masterKubeConfig: node.kubeconfig
networkConfig:
mtu: 1450
networkPluginName: ""
nodeIP: ""
nodeName: node1.example.com
podManifestConfig: (2)
path: "/path/to/pod-manifest-file" (3)
fileCheckIntervalSeconds: 30 (4)
proxyArguments:
proxy-mode:
- iptables (5)
volumeConfig:
localQuota:
perFSGroup: null(6)
servingInfo:
bindAddress: 0.0.0.0:10250
bindNetwork: tcp4
certFile: server.crt
clientCA: node-client-ca.crt
keyFile: server.key
namedCertificates: null
volumeDirectory: /root/openshift.local.volumes
1 | Configures an IP address to be prepended to a pod’s /etc/resolv.conf by adding the address here. |
2 | Allows pods to be placed directly on certain set of nodes, or on all nodes without going through the scheduler. You can then use pods to perform the same administrative tasks and support the same services on each node. |
3 | Specifies the path for the pod manifest file or directory. If it is a directory, then it is expected to contain one or more manifest files. This is used by the Kubelet to create pods on the node. |
4 | This is the interval (in seconds) for checking the manifest file for new data. The interval must be a positive value. |
5 | The service proxy implementation to use. |
6 | Preliminary support for local emptyDir volume quotas. Set this value to a resource quantity representing the desired quota per FSGroup, per node (for example, 1Gi or 512Mi). Currently requires that the volumeDirectory be on an XFS filesystem mounted with the 'gquota' option, and the matching security context constraint's fsGroup type set to 'MustRunAs'. |
The node configuration file determines the resources of a node. See the Allocating node resources section in the Cluster Administrator guide for more information.
Parameter Name | Description |
---|---|
|
The fully specified configuration starting an OKD node. |
|
Node may have multiple IPs, so this specifies the IP to use for pod traffic routing. If not specified, network parse/lookup on the nodeName is performed and the first non-loopback address is used. |
|
The value used to identify this particular node in the cluster. If possible, this should be your fully qualified hostname. If you are describing a set of static nodes to the master, this value must match one of the values in the list. |
|
Controls grace period for deleting pods on failed nodes. It takes valid time duration string. If empty, you get the default pod eviction timeout. |
|
Specifies the client cert/key to use when proxying to pods. |
Parameter Name | Description |
---|---|
|
If true, the kubelet will ignore errors from Docker. This means that a node can start on a machine that does not have docker started. |
|
Holds Docker related configuration options |
|
The handler to use for executing commands in Docker containers. |
The rate at which Kubelet talks to API server depends on Queries per Second (QPS) and burst values. The default values are good enough if there are limited pods running on each node. Provided there are enough CPU and memory resources on the node, the QPS and burst values can be tweaked in the /etc/origin/node/node-config.yaml file:
kubeletArguments:
  kube-api-qps:
  - "20"
  kube-api-burst:
  - "40"
The QPS and burst values above are defaults for OKD.
If you are using Docker 1.9+, you may want to consider enabling parallel image pulling, as the default is to pull images one at a time.
There is a potential issue with data corruption prior to Docker 1.9. However, starting with 1.9, the corruption issue is resolved and it is safe to switch to parallel pulls.
kubeletArguments:
serialize-image-pulls:
- "false" (1)
1 | Change to true to disable parallel pulls. (Pulling images one at a time is the default configuration.) |
For some authentication configurations, an LDAP bindPassword or OAuth clientSecret value is required. Instead of specifying these values directly in the master configuration file, these values may be provided as environment variables, external files, or in encrypted files.
...
bindPassword:
env: BIND_PASSWORD_ENV_VAR_NAME
...
bindPassword:
file: bindPassword.txt
...
bindPassword:
file: bindPassword.encrypted
keyFile: bindPassword.key
To create the encrypted file and key file for the above example:
$ oc adm ca encrypt --genkey=bindPassword.key --out=bindPassword.encrypted
> Data to encrypt: B1ndPass0rd!
Run oc adm commands only from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts.
Encrypted data is only as secure as the decrypting key. Care should be taken to limit filesystem permissions and access to the key file.
When defining an OKD configuration from scratch, start by creating new configuration files.
For master host configuration files, use the openshift start
command with the
--write-config
option to write the configuration files. For node hosts, use
the oc adm create-node-config
command to write the configuration files.
The following commands write the relevant launch configuration file(s),
certificate files, and any other necessary files to the specified
--write-config
or --node-dir
directory.
Generated certificate files are valid for two years, while the certification authority (CA) certificate is valid for five years. This can be altered with the --expire-days and --signer-expire-days options, but for security reasons, it is recommended to not make them greater than these values.
To create configuration files for an all-in-one server (a master and a node on the same host) in the specified directory:
$ openshift start --write-config=/openshift.local.config
To create a master configuration file and other required files in the specified directory:
$ openshift start master --write-config=/openshift.local.config/master
To create a node configuration file and other related files in the specified directory:
$ oc adm create-node-config \
    --node-dir=/openshift.local.config/node-<node_hostname> \
    --node=<node_hostname> \
    --hostnames=<node_hostname>,<ip_address> \
    --certificate-authority="/path/to/ca.crt" \
    --signer-cert="/path/to/ca.crt" \
    --signer-key="/path/to/ca.key" \
    --signer-serial="/path/to/ca.serial.txt" \
    --node-client-certificate-authority="/path/to/ca.crt"
When creating node configuration files, the --hostnames
option accepts a
comma-delimited list of every host name or IP address you want server
certificates to be valid for.
Once you have modified the master and/or node configuration files to your specifications, you can use them when launching servers by specifying them as an argument. Keep in mind that if you specify a configuration file, none of the other command line options you pass are respected.
To launch an all-in-one server using a master configuration and a node configuration file:
$ openshift start --master-config=/openshift.local.config/master/master-config.yaml --node-config=/openshift.local.config/node-<node_hostname>/node-config.yaml
To launch a master server using a master configuration file:
$ openshift start master --config=/openshift.local.config/master/master-config.yaml
To launch a node server using a node configuration file:
$ openshift start node --config=/openshift.local.config/node-<node_hostname>/node-config.yaml
OKD uses the systemd-journald.service
to collect log messages for debugging.
The number of lines displayed in the web console is hard-coded at 5000 and cannot be changed. To see the entire log, use the CLI.
The logging uses five log message severities based on Kubernetes logging conventions, as follows:
Option | Description
---|---
0 | Errors and warnings only
2 | Normal information
4 | Debugging-level information
6 | API-level debugging information (request / response)
8 | Body-level API debugging information
You can control which INFO messages are logged by setting the loglevel option in the /etc/sysconfig/atomic-openshift-node, /etc/sysconfig/atomic-openshift-master-api, and /etc/sysconfig/atomic-openshift-master-controllers files. Configuring the logs to collect all messages can lead to large logs that are difficult to interpret and can take up excessive space. Collecting all messages should only be used in debug situations.
Messages with FATAL, ERROR, WARNING, and some INFO severities appear in the logs regardless of the log configuration.
You can view logs for the master or the node system using the following command:
# journalctl -r -u <journal_name>
Use the -r
option to show the newest entries first.
For example:
# journalctl -r -u atomic-openshift-master-controllers
# journalctl -r -u atomic-openshift-master-api
# journalctl -r -u atomic-openshift-node.service
To change the logging level:
Edit the /etc/sysconfig/atomic-openshift-master file for the master or /etc/sysconfig/atomic-openshift-node file for the nodes.
Enter a value from the Log Level Options table above in the OPTIONS=--loglevel=
field.
For example:
OPTIONS=--loglevel=4
Restart the master or node host as appropriate. See Restarting OKD services.
After the restart, all new log messages will conform to the new setting. Older messages do not change.
The default log level can be set using the Advanced Install. For more information, see Cluster Variables.
The following examples are excerpts from a master journald log at various log levels. Timestamps and system information have been removed from these examples.
4897 plugins.go:77] Registered admission plugin "NamespaceLifecycle" 4897 start_master.go:290] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue. 4897 start_master.go:290] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue. 4897 start_master.go:290] Warning: aggregatorConfig.proxyClientInfo: Invalid value: "": if no client certificate is specified, the aggregator will be unable to proxy to remote servers, 4897 start_master.go:412] Starting controllers on 0.0.0.0:8444 (v3.7.14) 4897 start_master.go:416] Using images from "openshift3/ose-<component>:v3.7.14" 4897 standalone_apiserver.go:106] Started health checks at 0.0.0.0:8444 4897 plugins.go:77] Registered admission plugin "NamespaceLifecycle" 4897 configgetter.go:53] Initializing cache sizes based on 0MB limit 4897 leaderelection.go:105] Attempting to acquire openshift-master-controllers lease as master-bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com-10.19.41.74-xtz6lbqb, renewing every 3s, hold 4897 leaderelection.go:179] attempting to acquire leader lease... systemd[1]: Started Atomic OpenShift master Controllers. 4897 leaderelection.go:189] successfully acquired lease kube-system/openshift-master-controllers 4897 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"openshift-master-controllers", UID:"aca86731-ffbe-11e7-8d33-525400c845a8", APIVersion:"v1", 4897 start_master.go:627] Started serviceaccount-token controller 4897 factory.go:351] Creating scheduler from configuration: {{ } [{NoVolumeZoneConflict <nil>} {MaxEBSVolumeCount <nil>} {MaxGCEPDVolumeCount <nil>} {MaxAzureDiskVolumeCount <nil>} {Mat 4897 factory.go:360] Registering predicate: NoVolumeZoneConflict 4897 plugins.go:145] Predicate type NoVolumeZoneConflict already registered, reusing. 4897 factory.go:360] Registering predicate: MaxEBSVolumeCount 4897 plugins.go:145] Predicate type MaxEBSVolumeCount already registered, reusing. 4897 factory.go:360] Registering predicate: MaxGCEPDVolumeCount
4897 master.go:47] Initializing SDN master of type "redhat/openshift-ovs-subnet" 4897 master.go:107] Created ClusterNetwork default (network: "10.128.0.0/14", hostSubnetBits: 9, serviceNetwork: "172.30.0.0/16", pluginName: "redhat/openshift-ovs-subnet") 4897 start_master.go:690] Started "openshift.io/sdn" 4897 start_master.go:680] Starting "openshift.io/service-serving-cert" 4897 controllermanager.go:466] Started "namespace" 4897 controllermanager.go:456] Starting "daemonset" 4897 controller_utils.go:1025] Waiting for caches to sync for namespace controller 4897 shared_informer.go:298] resyncPeriod 120000000000 is smaller than resyncCheckPeriod 600000000000 and the informer has already started. Changing it to 600000000000 4897 start_master.go:690] Started "openshift.io/service-serving-cert" 4897 start_master.go:680] Starting "openshift.io/image-signature-import" 4897 start_master.go:690] Started "openshift.io/image-signature-import" 4897 start_master.go:680] Starting "openshift.io/templateinstance" 4897 controllermanager.go:466] Started "daemonset" 4897 controllermanager.go:456] Starting "statefulset" 4897 daemoncontroller.go:222] Starting daemon sets controller 4897 controller_utils.go:1025] Waiting for caches to sync for daemon sets controller 4897 controllermanager.go:466] Started "statefulset" 4897 controllermanager.go:456] Starting "cronjob" 4897 stateful_set.go:147] Starting stateful set controller 4897 controller_utils.go:1025] Waiting for caches to sync for stateful set controller 4897 start_master.go:690] Started "openshift.io/templateinstance" 4897 start_master.go:680] Starting "openshift.io/horizontalpodautoscaling
4897 factory.go:366] Registering priority: Zone 4897 factory.go:401] Creating scheduler with fit predicates 'map[GeneralPredicates:{} CheckNodeMemoryPressure:{} CheckNodeDiskPressure:{} Region:{} NoVolumeZoneC 4897 controller_utils.go:1025] Waiting for caches to sync for tokens controller 4897 controllermanager.go:108] Version: v1.7.6+a08f5eeb62 4897 leaderelection.go:179] attempting to acquire leader lease... 4897 leaderelection.go:189] successfully acquired lease kube-system/kube-controller-manager 4897 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"kube-controller-manager", UID:"acb3e9c6-ffbe-11e7-8d33-525400c845a8", APIVersion:"v1", Resou 4897 controller_utils.go:1032] Caches are synced for tokens controller 4897 plugins.go:101] No cloud provider specified. 4897 controllermanager.go:481] "serviceaccount-token" is disabled 4897 controllermanager.go:450] "bootstrapsigner" is disabled 4897 controllermanager.go:450] "tokencleaner" is disabled 4897 controllermanager.go:456] Starting "garbagecollector" 4897 start_master.go:680] Starting "openshift.io/build" 4897 controllermanager.go:466] Started "garbagecollector" 4897 controllermanager.go:456] Starting "deployment" 4897 garbagecollector.go:126] Starting garbage collector controller 4897 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller 4897 controllermanager.go:466] Started "deployment" 4897 controllermanager.go:450] "horizontalpodautoscaling" is disabled 4897 controllermanager.go:456] Starting "disruption" 4897 deployment_controller.go:152] Starting deployment controller
4897 plugins.go:77] Registered admission plugin "NamespaceLifecycle" 4897 start_master.go:290] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue. 4897 start_master.go:290] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue. 4897 start_master.go:290] Warning: aggregatorConfig.proxyClientInfo: Invalid value: "": if no client certificate is specified, the aggregator will be unable to proxy to remote serv 4897 start_master.go:412] Starting controllers on 0.0.0.0:8444 (v3.7.14) 4897 start_master.go:416] Using images from "openshift3/ose-<component>:v3.7.14" 4897 standalone_apiserver.go:106] Started health checks at 0.0.0.0:8444 4897 plugins.go:77] Registered admission plugin "NamespaceLifecycle" 4897 configgetter.go:53] Initializing cache sizes based on 0MB limit 4897 leaderelection.go:105] Attempting to acquire openshift-master-controllers lease as master-bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com-10.19.41.74-xtz6lbqb, renewing every 3s, 4897 leaderelection.go:179] attempting to acquire leader lease... systemd[1]: Started Atomic OpenShift master Controllers. 4897 leaderelection.go:189] successfully acquired lease kube-system/openshift-master-controllers 4897 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"openshift-master-controllers", UID:"aca86731-ffbe-11e7-8d33-525400c845a8", APIVersion:" 4897 start_master.go:627] Started serviceaccount-token controller
4613 plugins.go:77] Registered admission plugin "NamespaceLifecycle" 4613 master.go:320] Starting Web Console https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/console/ 4613 master.go:329] Starting OAuth2 API at /oauth 4613 master.go:320] Starting Web Console https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/console/ 4613 master.go:329] Starting OAuth2 API at /oauth 4613 master.go:320] Starting Web Console https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/console/ 4613 master.go:329] Starting OAuth2 API at /oauth 4613 swagger.go:38] No API exists for predefined swagger description /oapi/v1 4613 swagger.go:38] No API exists for predefined swagger description /api/v1 [restful] 2018/01/22 16:53:14 log.go:33: [restful/swagger] listing is available at https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/swaggerapi [restful] 2018/01/22 16:53:14 log.go:33: [restful/swagger] https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/swaggerui/ is mapped to folder /swagger-ui/ 4613 master.go:320] Starting Web Console https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/console/ 4613 master.go:329] Starting OAuth2 API at /oauth 4613 swagger.go:38] No API exists for predefined swagger description /oapi/v1 4613 swagger.go:38] No API exists for predefined swagger description /api/v1 [restful] 2018/01/22 16:53:14 log.go:33: [restful/swagger] listing is available at https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/swaggerapi [restful] 2018/01/22 16:53:14 log.go:33: [restful/swagger] https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/swaggerui/ is mapped to folder /swagger-ui/ Starting Web Console https://bkr-hv03-guest44.dsal.lab.eng.bos.redhat.com:8443/console/ Starting OAuth2 API at /oauth No API exists for predefined swagger description /oapi/v1 No API exists for predefined swagger description /api/v1