In OpenShift Container Platform version 4.12, you can install a cluster on Amazon Web Services (AWS) into an existing VPC, extending workers to the edge of the Cloud Infrastructure using AWS Local Zones.
After you create an Amazon Web Services (AWS) Local Zone environment and deploy your cluster, you can use edge worker nodes to create user workloads in Local Zone subnets.
AWS Local Zones are a type of infrastructure that places cloud resources close to metropolitan regions. For more information, see the AWS Local Zones Documentation.
OpenShift Container Platform can be installed in existing VPCs with Local Zone subnets. The Local Zone subnets can be used to extend the regular worker nodes to the edge networks. The edge worker nodes are dedicated to running user workloads.
One way to create the VPC and subnets is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company’s policies.
The steps for performing an installer-provisioned infrastructure installation are provided as an example only. Installing a cluster in a VPC that you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. The CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. |
You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. |
You noted the region and supported AWS Local Zones locations to create the network resources in.
You read the Features for each AWS Local Zones location.
You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
Be sure to also review this site list if you are configuring a proxy. |
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
Some limitations exist when you deploy a cluster with a default installation configuration in Amazon Web Services (AWS) Local Zones.
In OpenShift Container Platform 4.12, you require access to the internet to install your cluster.
You must have internet access to:
Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. |
If you plan to create the subnets in AWS Local Zones, you must opt in to each zone group separately.
You have installed the AWS CLI.
You have determined into which region you will deploy your OpenShift Container Platform cluster.
Export a variable to contain the name of the region in which you plan to deploy your OpenShift Container Platform cluster by running the following command:
$ export CLUSTER_REGION="<region_name>" (1)
1 | For <region_name> , specify a valid AWS region name, such as us-east-1 . |
List the zones that are available in your region by running the following command:
$ aws --region ${CLUSTER_REGION} ec2 describe-availability-zones \
--query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \
--filters Name=zone-type,Values=local-zone \
--all-availability-zones
Depending on the region, the list of available zones can be long. The command will return the following fields:
ZoneName
The name of the Local Zone.
GroupName
The group that the zone is part of. You need to save this name to opt in.
Status
The status of the Local Zone group. If the status is not-opted-in, you must opt in to the GroupName by running the commands that follow.
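For example, an entry for a zone group that you have not yet opted in to might resemble the following; the zone names here are illustrative only:
[
    [
        {
            "ZoneName": "us-east-1-nyc-1a",
            "GroupName": "us-east-1-nyc-1",
            "Status": "not-opted-in"
        }
    ]
]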
Export a variable to contain the name of the Local Zone to host your VPC by running the following command:
$ export ZONE_GROUP_NAME="<value_of_GroupName>" (1)
1 | The <value_of_GroupName> specifies the name of the group of the Local Zone you want to create subnets on. For example, specify us-east-1-nyc-1 to use the zone us-east-1-nyc-1a , US East (New York). |
Opt in to the zone group on your AWS account by running the following command:
$ aws ec2 modify-availability-zone-group \
--group-name "${ZONE_GROUP_NAME}" \
--opt-in-status opted-in
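Optionally, you can verify that the group now shows as opted in before you create subnets; one way, reusing the variables that you exported earlier, is:
$ aws --region ${CLUSTER_REGION} ec2 describe-availability-zones \
    --all-availability-zones \
    --filters Name=zone-type,Values=local-zone Name=group-name,Values="${ZONE_GROUP_NAME}" \
    --query 'AvailabilityZones[].[ZoneName, OptInStatus]'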
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes.
You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
Complete the OpenShift Container Platform subscription from the AWS Marketplace.
Record the AMI ID for your specific region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster.
Sample install-config.yaml file with AWS Marketplace worker nodes
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
amiID: ami-06c4d345f7c207239 (1)
type: m5.4xlarge
replicas: 3
metadata:
name: test-cluster
platform:
aws:
region: us-east-2 (2)
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
1 | The AMI ID from your AWS Marketplace subscription. |
2 | Your AMI ID is associated with a specific AWS region. When creating the installation configuration file, ensure that you select the same AWS region that you specified when configuring your subscription. |
You must create a Virtual Private Cloud (VPC), and subnets for each Local Zone location, in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to extend worker nodes to the edge locations. You can further customize the VPC to meet your requirements, including VPN, route tables, and add new Local Zone subnets that are not included at initial deployment.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
You configured an AWS account.
You added your AWS keys and region to your local AWS profile by running aws configure.
You opted in to the AWS Local Zones on your AWS account.
Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "ClusterName", (1)
"ParameterValue": "mycluster" (2)
},
{
"ParameterKey": "VpcCidr", (3)
"ParameterValue": "10.0.0.0/16" (4)
},
{
"ParameterKey": "AvailabilityZoneCount", (5)
"ParameterValue": "3" (6)
},
{
"ParameterKey": "SubnetBits", (7)
"ParameterValue": "12" (8)
}
]
1 | A short, representative cluster name to use for hostnames, etc. |
2 | Specify the cluster name that you used when you generated the
install-config.yaml file for the cluster. |
3 | The CIDR block for the VPC. |
4 | Specify a CIDR block in the format x.x.x.x/16-24 . |
5 | The number of availability zones to deploy the VPC in. |
6 | Specify an integer between 1 and 3 . |
7 | The size of each subnet in each availability zone. |
8 | Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . |
Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.
Launch the CloudFormation template to create a stack of AWS resources that represent the VPC by running the following command:
You must enter the command on a single line. |
$ aws cloudformation create-stack --stack-name <name> \ (1)
--template-body file://<template>.yaml \ (2)
--parameters file://<parameters>.json (3)
1 | <name> is the name for the CloudFormation stack, such as cluster-vpc .
You need the name of this stack if you remove the cluster. |
2 | <template> is the relative path to and name of the CloudFormation template
YAML file that you saved. |
3 | <parameters> is the relative path to and name of the CloudFormation
parameters JSON file. |
arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f
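Stack creation can take several minutes. Before you query the stack, you can optionally wait for it to finish by running the following command:
$ aws cloudformation wait stack-create-complete --stack-name <name>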
Confirm that the template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
VpcId | The ID of your VPC. |
PublicSubnetIds | The IDs of the new public subnets. |
PrivateSubnetIds | The IDs of the new private subnets. |
PublicRouteTableId | The ID of the new public route table. |
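If you prefer to list only the output values rather than the full stack description, you can narrow the same command with a query, for example:
$ aws cloudformation describe-stacks --stack-name <name> \
    --query 'Stacks[0].Outputs[].[OutputKey, OutputValue]' \
    --output table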
You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster that uses AWS Local Zones.
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs
Parameters:
ClusterName:
Type: String
Description: ClusterName used to prefix resource names
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
AvailabilityZoneCount:
ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
MinValue: 1
MaxValue: 3
Default: 1
Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
Type: Number
SubnetBits:
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
MinValue: 5
MaxValue: 13
Default: 12
Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
Type: Number
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Network Configuration"
Parameters:
- VpcCidr
- SubnetBits
- Label:
default: "Availability Zones"
Parameters:
- AvailabilityZoneCount
ParameterLabels:
ClusterName:
default: ""
AvailabilityZoneCount:
default: "Availability Zone Count"
VpcCidr:
default: "VPC CIDR"
SubnetBits:
default: "Bits Per Subnet"
Conditions:
DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]
Resources:
VPC:
Type: "AWS::EC2::VPC"
Properties:
EnableDnsSupport: "true"
EnableDnsHostnames: "true"
CidrBlock: !Ref VpcCidr
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-vpc" ] ]
- Key: !Join [ "", [ "kubernetes.io/cluster/unmanaged" ] ]
Value: "shared"
PublicSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-public-1" ] ]
PublicSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-public-2" ] ]
PublicSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-public-3" ] ]
InternetGateway:
Type: "AWS::EC2::InternetGateway"
Properties:
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-igw" ] ]
GatewayToInternet:
Type: "AWS::EC2::VPCGatewayAttachment"
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
PublicRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-rtb-public" ] ]
PublicRoute:
Type: "AWS::EC2::Route"
DependsOn: GatewayToInternet
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
PublicSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
PrivateSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-private-1" ] ]
PrivateRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-rtb-private-1" ] ]
PrivateSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PrivateSubnet
RouteTableId: !Ref PrivateRouteTable
NAT:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Properties:
AllocationId:
"Fn::GetAtt":
- EIP
- AllocationId
SubnetId: !Ref PublicSubnet
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-natgw-private-1" ] ]
EIP:
Type: "AWS::EC2::EIP"
Properties:
Domain: vpc
Route:
Type: "AWS::EC2::Route"
Properties:
RouteTableId:
Ref: PrivateRouteTable
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT
PrivateSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-private-2" ] ]
PrivateRouteTable2:
Type: "AWS::EC2::RouteTable"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-rtb-private-2" ] ]
PrivateSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PrivateSubnet2
RouteTableId: !Ref PrivateRouteTable2
NAT2:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz2
Properties:
AllocationId:
"Fn::GetAtt":
- EIP2
- AllocationId
SubnetId: !Ref PublicSubnet2
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-natgw-private-2" ] ]
EIP2:
Type: "AWS::EC2::EIP"
Condition: DoAz2
Properties:
Domain: vpc
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-eip-private-2" ] ]
Route2:
Type: "AWS::EC2::Route"
Condition: DoAz2
Properties:
RouteTableId:
Ref: PrivateRouteTable2
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT2
PrivateSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-private-3" ] ]
PrivateRouteTable3:
Type: "AWS::EC2::RouteTable"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-rtb-private-3" ] ]
PrivateSubnetRouteTableAssociation3:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz3
Properties:
SubnetId: !Ref PrivateSubnet3
RouteTableId: !Ref PrivateRouteTable3
NAT3:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz3
Properties:
AllocationId:
"Fn::GetAtt":
- EIP3
- AllocationId
SubnetId: !Ref PublicSubnet3
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-natgw-private-3" ] ]
EIP3:
Type: "AWS::EC2::EIP"
Condition: DoAz3
Properties:
Domain: vpc
Tags:
- Key: Name
Value: !Join [ "", [ !Ref ClusterName, "-eip-private-3" ] ]
Route3:
Type: "AWS::EC2::Route"
Condition: DoAz3
Properties:
RouteTableId:
Ref: PrivateRouteTable3
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT3
S3Endpoint:
Type: AWS::EC2::VPCEndpoint
Properties:
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal: '*'
Action:
- '*'
Resource:
- '*'
RouteTableIds:
- !Ref PublicRouteTable
- !Ref PrivateRouteTable
- !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
- !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
ServiceName: !Join
- ''
- - com.amazonaws.
- !Ref 'AWS::Region'
- .s3
VpcId: !Ref VPC
Outputs:
VpcId:
Description: ID of the new VPC.
Value: !Ref VPC
PublicSubnetIds:
Description: Subnet IDs of the public subnets.
Value:
!Join [
",",
[!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
]
PrivateSubnetIds:
Description: Subnet IDs of the private subnets.
Value:
!Join [
",",
[!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
]
PublicRouteTableId:
Description: Public Route table ID
Value: !Ref PublicRouteTable
PrivateRouteTableId:
Description: Private Route table ID
Value: !Ref PrivateRouteTable
You must create a subnet in AWS Local Zones before you configure a worker machineset for your OpenShift Container Platform cluster.
You must repeat the following process for each Local Zone you want to deploy worker nodes to.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the subnet.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. |
You configured an AWS account.
You added your AWS keys and region to your local AWS profile by running aws configure.
You opted in to the Local Zone group.
Create a JSON file that contains the parameter values that the template requires:
[
{
"ParameterKey": "ClusterName", (1)
"ParameterValue": "mycluster" (2)
},
{
"ParameterKey": "VpcId", (3)
"ParameterValue": "vpc-<random_string>" (4)
},
{
"ParameterKey": "PublicRouteTableId", (5)
"ParameterValue": "<vpc_rtb_pub>" (6)
},
{
"ParameterKey": "LocalZoneName", (7)
"ParameterValue": "<cluster_region_name>-<location_identifier>-<zone_identifier>" (8)
},
{
"ParameterKey": "LocalZoneNameShort", (9)
"ParameterValue": "<lz_zone_shortname>" (10)
},
{
"ParameterKey": "PublicSubnetCidr", (11)
"ParameterValue": "10.0.128.0/20" (12)
}
]
1 | A short, representative cluster name to use for hostnames, etc. |
2 | Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. |
3 | The VPC ID in which the Local Zone’s subnet will be created. |
4 | Specify the VpcId value from the output of the CloudFormation template
for the VPC. |
5 | The Public Route Table ID for the VPC. |
6 | Specify the PublicRouteTableId value from the output of the CloudFormation template for the VPC. |
7 | The name of the Local Zone in which the subnet is created. |
8 | Specify the Local Zone that you opted your AWS account into, such as us-east-1-nyc-1a . |
9 | A short name for the AWS Local Zone, used when naming resources. |
10 | Specify a short name for the AWS Local Zone that you opted your AWS account into, such as <zone_group_identified><zone_identifier> . For example, us-east-1-nyc-1a is shortened to nyc-1a . |
11 | The CIDR block to allow access to the Local Zone. |
12 | Specify a CIDR block in the format x.x.x.x/16-24 . |
Copy the template from the CloudFormation template for the subnet section of this topic and save it as a YAML file on your computer. This template describes the Local Zone subnet that your cluster requires.
Launch the CloudFormation template to create a stack of AWS resources that represent the subnet by running the following command:
You must enter the command on a single line. |
$ aws cloudformation create-stack --stack-name <subnet_stack_name> \ (1)
--template-body file://<template>.yaml \ (2)
--parameters file://<parameters>.json (3)
1 | <subnet_stack_name> is the name for the CloudFormation stack, such as cluster-lz-<local_zone_shortname> .
You need the name of this stack if you remove the cluster. |
2 | <template> is the relative path to and name of the CloudFormation template
YAML file that you saved. |
3 | <parameters> is the relative path to and name of the CloudFormation
parameters JSON file. |
arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-lz-nyc1/dbedae40-2fd3-11eb-820e-12a48460849f
Confirm that the template components exist by running the following command:
$ aws cloudformation describe-stacks --stack-name <subnet_stack_name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
PublicSubnetIds | The IDs of the new public subnets. |
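Because the subnet template below exposes the subnet ID as its only output, you can also pull the value directly, for example:
$ aws cloudformation describe-stacks --stack-name <subnet_stack_name> \
    --query 'Stacks[0].Outputs[?OutputKey==`PublicSubnetIds`].OutputValue' \
    --output text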
You can use the following CloudFormation template to deploy the subnet that you need for your OpenShift Container Platform cluster that uses AWS Local Zones.
# CloudFormation template used to create Local Zone subnets and dependencies
AWSTemplateFormatVersion: 2010-09-09
Description: Template for creating a Local Zone subnet
Parameters:
ClusterName:
Description: ClusterName used to prefix resource names
Type: String
VpcId:
Description: VPC Id
Type: String
LocalZoneName:
Description: Local Zone Name (Example us-east-1-bos-1)
Type: String
LocalZoneNameShort:
Description: Short name for Local Zone used on tag Name (Example bos1)
Type: String
PublicRouteTableId:
Description: Public Route Table ID to associate the Local Zone subnet
Type: String
PublicSubnetCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.128.0/20
Description: CIDR block for Public Subnet
Type: String
Resources:
PublicSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VpcId
CidrBlock: !Ref PublicSubnetCidr
AvailabilityZone: !Ref LocalZoneName
Tags:
- Key: Name
Value: !Join
- ""
- [ !Ref ClusterName, "-public-", !Ref LocalZoneNameShort, "-1" ]
- Key: kubernetes.io/cluster/unmanaged
Value: "true"
PublicSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTableId
Outputs:
PublicSubnetIds:
Description: Subnet IDs of the public subnets.
Value:
!Join [
"",
[!Ref PublicSubnet]
]
You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. |
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. |
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required. |
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. |
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
1 | Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. |
If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. |
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. |
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. |
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> (1)
1 | Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 |
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
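After the cluster is installed with the key in place and your private key loaded in the agent, node access follows the usual SSH pattern; the node address below is a placeholder:
$ ssh core@<node_address>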
To install OpenShift Container Platform on Amazon Web Services (AWS) and use AWS Local Zones, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file and Kubernetes manifests.
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2]
---|---|---|---|---|---
Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300
Control plane | RHCOS | 4 | 16 GB | 100 GB | 300
Compute | RHCOS, RHEL 8.6 and later [3] | 2 | 8 GB | 100 GB | 300
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform.
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform for use with AWS Local Zones.
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". |
c5.*
c5d.*
m6i.*
m5.*
r5.*
t3.*
See AWS Local Zones features in the AWS documentation for more information about AWS Local Zones and the supported instances types and services.
Generate and customize the installation configuration file that the installation program needs to deploy your cluster.
You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> (1)
1 | For <installation_directory> , specify the directory name to store the
files that the installation program creates. |
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. |
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. |
Select aws as the platform to target.
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. |
Select the AWS region to deploy the cluster to. The region that you specify must be the same region that contains the Local Zone that you opted into for your AWS account.
Select the base domain for the Route 53 service that you configured for your cluster.
Enter a descriptive name for your cluster.
Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Edit the install-config.yaml file to provide the subnets for the availability zones that your VPC uses:
platform:
aws:
subnets: (1)
- publicSubnetId-1
- publicSubnetId-2
- publicSubnetId-3
- privateSubnetId-1
- privateSubnetId-2
- privateSubnetId-3
1 | Add the subnets section and specify the PrivateSubnetIds and PublicSubnetIds values from the outputs of the CloudFormation template for the VPC. Do not include the Local Zone subnets here. |
Optional: Back up the install-config.yaml file.
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. |
See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest files that the cluster needs to configure the machines.
You obtained the OpenShift Container Platform installation program.
You created the install-config.yaml installation configuration file.
You installed the jq package.
Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster by running the following command:
$ ./openshift-install create manifests --dir <installation_directory> (1)
1 | For <installation_directory> , specify the installation directory that
contains the install-config.yaml file you created. |
Set the default Maximum Transmission Unit (MTU) according to the network plugin:
Generally, the Maximum Transmission Unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300. See How Local Zones work in the AWS documentation. The cluster network MTU must always be less than the EC2 MTU to account for the overhead. The specific overhead is determined by your network plugin; for example, OVN-Kubernetes (Geneve encapsulation) adds 100 bytes of overhead and OpenShift SDN (VXLAN encapsulation) adds 50 bytes, which yields the 1200 and 1250 values used in the following commands.
The network plugin might provide additional features, such as IPsec, that also require the MTU to be decreased. Check the documentation for additional information. |
If you are using the OVN-Kubernetes network plugin, enter the following command:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
ovnKubernetesConfig:
mtu: 1200
EOF
If you are using the OpenShift SDN network plugin, enter the following command:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
defaultNetwork:
openshiftSDNConfig:
mtu: 1250
EOF
Create the machine set manifests for the worker nodes in your Local Zone.
Export a local variable that contains the name of the Local Zone that you opted your AWS account into by running the following command:
$ export LZ_ZONE_NAME="<local_zone_name>" (1)
1 | For <local_zone_name> , specify the Local Zone that you opted your AWS account into, such as us-east-1-nyc-1a . |
Review the instance types for the location that you will deploy to by running the following command:
$ aws ec2 describe-instance-type-offerings \
    --location-type availability-zone \
    --filters Name=location,Values=${LZ_ZONE_NAME} \
    --region <region> (1)
1 | For <region> , specify the name of the region that you will deploy to, such as us-east-1 . |
Export a variable to define the instance type for the worker machines to deploy on the Local Zone subnet by running the following command:
$ export INSTANCE_TYPE="<instance_type>" (1)
1 | Set <instance_type> to a tested instance type, such as c5d.2xlarge . |
Store the AMI ID as a local variable by running the following command:
$ export AMI_ID=$(grep ami \
    <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml \
    | tail -n1 | awk '{print $2}')
Store the subnet ID as a local variable by running the following command:
$ export SUBNET_ID=$(aws cloudformation describe-stacks --stack-name "<subnet_stack_name>" \ (1)
| jq -r '.Stacks[0].Outputs[0].OutputValue')
1 | For <subnet_stack_name> , specify the name of the subnet stack that you created. |
Store the cluster ID as a local variable by running the following command:
$ export CLUSTER_ID="$(awk '/infrastructureName: / {print $2}' <installation_directory>/manifests/cluster-infrastructure-02-config.yml)"
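Optionally, confirm that every variable referenced by the manifest below is set before you generate it:
$ echo "CLUSTER_ID=${CLUSTER_ID} CLUSTER_REGION=${CLUSTER_REGION} ZONE_GROUP_NAME=${ZONE_GROUP_NAME}" \
  && echo "LZ_ZONE_NAME=${LZ_ZONE_NAME} INSTANCE_TYPE=${INSTANCE_TYPE} AMI_ID=${AMI_ID} SUBNET_ID=${SUBNET_ID}"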
Create the worker manifest file for the Local Zone that your VPC uses by running the following command:
$ cat <<EOF > <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-nyc1.yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
labels:
machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID}
name: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME}
namespace: openshift-machine-api
spec:
replicas: 1
selector:
matchLabels:
machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID}
machine.openshift.io/cluster-api-machineset: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME}
template:
metadata:
labels:
machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID}
machine.openshift.io/cluster-api-machine-role: edge
machine.openshift.io/cluster-api-machine-type: edge
machine.openshift.io/cluster-api-machineset: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME}
spec:
metadata:
labels:
machine.openshift.com/zone-type: local-zone
machine.openshift.com/zone-group: ${ZONE_GROUP_NAME}
node-role.kubernetes.io/edge: ""
taints:
- key: node-role.kubernetes.io/edge
effect: NoSchedule
providerSpec:
value:
ami:
id: ${AMI_ID}
apiVersion: machine.openshift.io/v1beta1
blockDevices:
- ebs:
volumeSize: 120
volumeType: gp2
credentialsSecret:
name: aws-cloud-credentials
deviceIndex: 0
iamInstanceProfile:
id: ${CLUSTER_ID}-worker-profile
instanceType: ${INSTANCE_TYPE}
kind: AWSMachineProviderConfig
placement:
availabilityZone: ${LZ_ZONE_NAME}
region: ${CLUSTER_REGION}
securityGroups:
- filters:
- name: tag:Name
values:
- ${CLUSTER_ID}-worker-sg
subnet:
id: ${SUBNET_ID}
publicIp: true
tags:
- name: kubernetes.io/cluster/${CLUSTER_ID}
value: owned
userDataSecret:
name: worker-user-data
EOF
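You can confirm that the new machine set manifest was written alongside the generated worker machine sets, for example:
$ ls <installation_directory>/openshift/ | grep machineset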
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation. |
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ (1)
--log-level=info (2)
1 | For <installation_directory> , specify the
location of your customized ./install-config.yaml file. |
2 | To view different installation details, specify warn , debug , or
error instead of info . |
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. |
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
The elevated permissions provided by the AdministratorAccess policy are required only during installation. |
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log.
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. |
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . |
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file.
For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. |
Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
1 | For <installation_directory> , specify the path to the directory that you stored
the installation files in. |
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
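Because the machine set that you created labels its nodes with node-role.kubernetes.io/edge, you can also confirm that the Local Zone workers joined the cluster:
$ oc get nodes -l node-role.kubernetes.io/edge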
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
You have access to the installation host.
You completed a cluster installation and all cluster Operators are available.
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. |
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. |
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.
In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console.
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
See About remote health monitoring for more information about the Telemetry service.
If necessary, you can opt out of remote health reporting.
If necessary, you can remove cloud provider credentials.