In OKD 4.15, you can install a cluster on Alibaba Cloud with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy
configuration parameters in a running cluster.
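For example, after installation you can adjust the kubeProxy configuration by editing the Cluster Network Operator custom resource. A minimal sketch, assuming you are logged in with cluster-admin privileges:
$ oc edit network.operator.openshift.io cluster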
Alibaba Cloud on OKD is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You reviewed details about the OKD installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system
namespace, you can manually create and maintain Resource Access Management (RAM) credentials.
During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys
list for the core
user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core
. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather
command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the /home/core/.ssh/authorized_keys.d/core file.
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OKD cluster that uses the Fedora cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub
public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather
command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent
process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent
:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OKD, provide the SSH public key to the installation program.
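For example, after the cluster is running, you can open a session on a node as the core user. In the following sketch, <node_address> is a placeholder for a node IP address or host name, and the -i option is unnecessary if the key is already loaded in your ssh-agent:
$ ssh -i <path>/<file_name> core@<node_address>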
Before you install OKD, download the installation file on the host you are using for installation.
You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space.
Download the installation program from https://github.com/openshift/okd/releases.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.
Using a pull secret from Red Hat OpenShift Cluster Manager is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}
as the pull secret when prompted during the installation.
If you do not use the pull secret from Red Hat OpenShift Cluster Manager:
Red Hat Operators are not available.
The Telemetry and Insights operators do not send data to Red Hat.
Content from the Red Hat Ecosystem Catalog Container images registry, such as image streams and Operators, is not available.
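For example, a minimal sketch of preparing the placeholder pull secret mentioned above; the file name pull-secret.txt is arbitrary:
$ echo '{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}' > pull-secret.txt
You can then paste the contents of this file when the installation program prompts for a pull secret.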
There are two phases prior to OKD installation where you can customize the network configuration.
You can customize the following network-related fields in the install-config.yaml
file before you create the manifest files:
networking.networkType
networking.clusterNetwork
networking.serviceNetwork
networking.machineNetwork
For more information on these fields, refer to Installation configuration parameters.
Set the networking.networkType parameter to OVNKubernetes.
The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for any cluster network in your cluster.
After creating the manifest files by running openshift-install create manifests
, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml
file during phase 2. However, you can further customize the network plugin during phase 2.
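As an illustration of the phase 1 fields, the networking stanza of an install-config.yaml file might look like the following sketch; the CIDR values are examples only and must not overlap with existing allocations in your environment:
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16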
You can customize the OKD cluster you install on Alibaba Cloud.
You have the OKD installation program and the pull secret for your cluster.
Create the install-config.yaml
file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 | For <installation_directory>, specify the directory name to store the files that the installation program creates. |
When specifying the directory:
Verify that the directory has the execute
permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Enter a descriptive name for your cluster.
Modify the install-config.yaml
file. You can find more information about the available parameters in the "Installation configuration parameters" section.
Back up the install-config.yaml
file so that you can use
it to install multiple clusters.
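For example, a simple way to back up the file before the installation program consumes it; the backup location and file name are arbitrary:
$ cp <installation_directory>/install-config.yaml ~/install-config.yaml.backup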
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
Generate the manifests by running the following command from the directory that contains the installation program:
$ openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
Specifies the directory in which the installation program creates files.
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag.
You must have:
Extracted and prepared the ccoctl
binary.
Set a $RELEASE_IMAGE
variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest
objects from the OKD release image by running the following command:
$ oc adm release extract \
--from=$RELEASE_IMAGE \
--credentials-requests \
--included \ (1)
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ (2)
--to=<path_to_directory_for_credentials_requests> (3)
1 | The --included parameter includes only the manifests that your specific cluster configuration requires. |
2 | Specify the location of the install-config.yaml file. |
3 | Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. |
This command might take a few moments to run.
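To confirm that the extraction succeeded, you can list the generated manifests; the directory is the one that you supplied with the --to flag:
$ ls <path_to_directory_for_credentials_requests>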
You can customize the installation configuration file (install-config.yaml
) to specify more details about
your cluster’s platform or modify the values of the required
parameters.
apiVersion: v1
baseDomain: alicloud-dev.devcluster.openshift.com
credentialsMode: Manual
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform: {}
replicas: 3
controlPlane:
architecture: amd64
hyperthreading: Enabled
name: master
platform: {}
replicas: 3
metadata:
creationTimestamp: null
name: test-cluster (1)
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes (2)
serviceNetwork:
- 172.30.0.0/16
platform:
alibabacloud:
defaultMachinePlatform: (3)
instanceType: ecs.g6.xlarge
systemDiskCategory: cloud_efficiency
systemDiskSize: 200
region: ap-southeast-1 (4)
resourceGroupID: rg-acfnw6j3hyai (5)
vpcID: vpc-0xifdjerdibmaqvtjob2b (8)
vswitchIDs: (8)
- vsw-0xi8ycgwc8wv5rhviwdq5
- vsw-0xiy6v3z2tedv009b4pz2
publish: External
pullSecret: '{"auths": {"cloud.openshift.com": {"auth": ... }' (6)
sshKey: |
ssh-rsa AAAA... (7)
1 | Required. The installation program prompts you for a cluster name. |
2 | The cluster network plugin to install. The default value OVNKubernetes is the only supported value. |
3 | Optional. Specify parameters for machine pools that do not define their own platform configuration. |
4 | Required. The installation program prompts you for the region to deploy the cluster to. |
5 | Optional. Specify an existing resource group where the cluster should be installed. |
6 | Required. The installation program prompts you for the pull secret. |
7 | Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster. |
8 | Optional. These are example vpcID and vswitchID values. |
Production environments can deny direct access to the internet and instead have
an HTTP or HTTPS proxy available. You can configure a new OKD
cluster to use a proxy by configuring the proxy settings in the
install-config.yaml
file.
You have an existing install-config.yaml
file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy
object’s spec.noProxy
field to bypass the proxy if necessary.
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and OpenStack, the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Edit your install-config.yaml
file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
additionalTrustBundle: | (4)
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> (5)
1 | A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. |
2 | A proxy URL to use for creating HTTPS connections outside the cluster. |
3 | A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. |
4 | If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle. |
5 | Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . |
The installation program does not support the proxy readinessEndpoints field.
If the installer times out, restart and then complete the deployment by using the wait-for command of the installation program. For example:
$ ./openshift-install wait-for install-complete --log-level debug
Save the file and reference it when installing OKD.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy
settings in the provided install-config.yaml
file. If no proxy settings are
provided, a cluster
Proxy
object is still created, but it will have a nil
spec
.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
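After the installation completes, you can inspect the resulting proxy configuration; a quick check, assuming the oc CLI is installed and KUBECONFIG is exported:
$ oc get proxy cluster -o yaml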
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster
. The CR specifies the fields for the Network
API in the operator.openshift.io
API group.
The CNO configuration inherits the following fields during cluster installation from the Network
API in the Network.config.openshift.io
API group:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin. OVNKubernetes
is the only supported plugin during installation.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork
object in the CNO object named cluster
.
The fields for the Cluster Network Operator (CNO) are described in the following table:
Field | Type | Description |
---|---|---|
metadata.name | string | The name of the CNO object. This name is always cluster. |
spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. |
spec.serviceNetwork | array | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. You can customize this field only in the install-config.yaml file before you create the manifest files. The value is read-only in the manifest file. |
spec.defaultNetwork | object | Configures the network plugin for the cluster network. |
spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. |
The values for the defaultNetwork
object are defined in the following table:
Field | Type | Description |
---|---|---|
type | string | OVNKubernetes. The cluster network plugin. OVNKubernetes is the only supported plugin during installation. This value cannot be changed after cluster installation. |
ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes network plugin. You can specify the configuration for OVN-Kubernetes in this object. |
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
Field | Type | Description |
---|---|---|
mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400. This value cannot be changed after cluster installation. |
genevePort | integer | The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation. |
ipsecConfig | object | Specify a configuration object for customizing the IPsec configuration. |
policyAuditConfig | object | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. |
gatewayConfig | object | Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. |
v4InternalSubnet | string | If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OKD installation, and that it is large enough to accommodate the maximum number of nodes in your cluster. This field cannot be changed after installation. The default value is 100.64.0.0/16. |
v6InternalSubnet | string | If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OKD installation, and that it is large enough to accommodate the maximum number of nodes in your cluster. This field cannot be changed after installation. The default value is fd98::/48. |
Field | Type | Description |
---|---|---|
rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second. |
maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. |
maxLogFiles | integer | The maximum number of log files that are retained. The default value is 5. |
destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file specified by <file>), or null (do not send the audit logs to any additional target). The default value is null. |
syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0. |
Field | Type | Description |
---|---|---|
routingViaHost | boolean | Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configuring routing in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false. This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. |
ipForwarding | object | You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted. |
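As an illustration, a defaultNetwork stanza that routes egress traffic through the host networking stack and allows global IP forwarding might look like the following sketch:
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    gatewayConfig:
      routingViaHost: true
      ipForwarding: Global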
Field | Type | Description |
---|---|---|
mode | string | Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled (IPsec is not enabled on cluster nodes), External (IPsec is enabled for network traffic with external hosts), or Full (IPsec is enabled for pod traffic and network traffic with external hosts). The default value is Disabled. |
defaultNetwork:
type: OVNKubernetes
ovnKubernetesConfig:
mtu: 1400
genevePort: 6081
ipsecConfig:
mode: Full
Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power®.
The values for the kubeProxyConfig
object are defined in the following table:
Field | Type | Description |
---|---|---|
iptablesSyncPeriod | string | The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation. |
proxyArguments.iptables-min-sync-period | array | The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package. The default value is 0s. |
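As an illustration, a kubeProxyConfig stanza with a more aggressive minimum sync period might look like the following sketch; remember that these settings have no effect with the OVN-Kubernetes plugin:
kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s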
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OKD manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
You have created the install-config.yaml
file and completed any modifications to it.
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 | <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. |
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml
in the <installation_directory>/manifests/
directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml
file, such as in the following example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
ovnKubernetesConfig:
ipsecConfig:
mode: Full
Optional: Back up the manifests/cluster-network-03-config.yml
file. The
installation program consumes the manifests/
directory when you create the
Ignition config files.
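For example, a simple backup command; the destination path is arbitrary:
$ cp <installation_directory>/manifests/cluster-network-03-config.yml ~/cluster-network-03-config.yml.backup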
You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations.
This configuration is necessary to run both Linux and Windows nodes in the same cluster.
You defined OVNKubernetes
for the networking.networkType
parameter in the install-config.yaml
file. See the installation documentation for configuring OKD network customizations on your chosen cloud provider for more information.
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
Specifies the name of the directory that contains the install-config.yaml
file for your cluster.
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml
in the <installation_directory>/manifests/
directory:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
EOF
where:
<installation_directory>
Specifies the directory name that contains the
manifests/
directory for your cluster.
Open the cluster-network-03-config.yml
file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
ovnKubernetesConfig:
hybridOverlayConfig:
hybridClusterNetwork: (1)
- cidr: 10.132.0.0/14
hostPrefix: 23
hybridOverlayVXLANPort: 9898 (2)
1 | Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. |
2 | Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken. |
Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.
Save the cluster-network-03-config.yml
file and quit the text editor.
Optional: Back up the manifests/cluster-network-03-config.yml
file. The
installation program deletes the manifests/
directory when creating the
cluster.
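After the cluster is installed, you can confirm that the hybrid overlay configuration was applied by inspecting the operator configuration; a quick check, assuming the oc CLI is installed:
$ oc get networks.operator.openshift.io cluster -o yaml
Look for the hybridOverlayConfig stanza under spec.defaultNetwork.ovnKubernetesConfig in the output.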
You can install OKD on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
You have configured an account with the cloud platform that hosts your cluster.
You have the OKD installation program and the pull secret for your cluster.
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory>, specify the location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn, debug, or error instead of info. |
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin
user.
Credential information also outputs to <installation_directory>/.openshift_install.log
.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
You can install the OpenShift CLI (oc) to interact with OKD from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4.15. Download and install the new version of oc.
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack the archive:
$ tar xvf <file>
Place the oc
binary in a directory that is on your PATH
.
To check your PATH
, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc
command:
$ oc <command>
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.zip
.
Unzip the archive with a ZIP program.
Move the oc
binary to a directory that is on your PATH
.
To check your PATH
, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc
command:
C:\> oc <command>
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.
Download oc.tar.gz
.
Unpack and unzip the archive.
Move the oc
binary to a directory on your PATH.
To check your PATH
, open a terminal and execute the following command:
$ echo $PATH
Verify your installation by using an oc
command:
$ oc <command>
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file.
The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.
The file is specific to a cluster and is created during OKD installation.
You deployed an OKD cluster.
You installed the oc CLI.
Export the kubeadmin
credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc
commands successfully using the exported configuration:
$ oc whoami
system:admin
The kubeadmin
user exists by default after an OKD installation. You can log in to your cluster as the kubeadmin
user by using the OKD web console.
You have access to the installation host.
You completed a cluster installation and all cluster Operators are available.
Obtain the password for the kubeadmin
user from the kubeadmin-password
file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OKD web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Alternatively, you can obtain the OKD route from the <installation_directory>/.openshift_install.log log file on the installation host.
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin
user.
See About remote health monitoring for more information about the Telemetry service.
See Accessing the web console for more details about accessing and understanding the OKD web console.
If necessary, you can opt out of remote health reporting.