HCP deployment guide - Getting started with ROSA

Follow this workshop to deploy a sample Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster. You can then use your cluster in the next tutorials.

Tutorial objectives
  • Learn to create your cluster prerequisites:

    • Create a sample virtual private cloud (VPC)

    • Create sample OpenID Connect (OIDC) resources

  • Create sample environment variables

  • Deploy a sample ROSA cluster

Prerequisites
  • ROSA version 1.2.31 or later

  • Amazon Web Services (AWS) command line interface (CLI)

  • ROSA CLI (rosa)
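
To confirm that the required command line tools are installed and meet the version requirement, you can print their versions; for example:

    $ rosa version
    $ aws --version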

Creating your cluster prerequisites

Before deploying a ROSA with HCP cluster, you must have both a VPC and OIDC resources, so we will create these resources first. ROSA with HCP uses the bring-your-own-VPC (BYO-VPC) model, which means the cluster is installed into a VPC that you supply in your AWS account.

Creating a VPC

  1. Make sure your AWS CLI (aws) is configured to use a region where ROSA with HCP is available. List the supported regions by running the following command:

    $ rosa list regions --hosted-cp
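    If you are not sure which region your aws CLI is currently configured to use, you can check it, and change it if needed, with the following commands. The region shown here is only an example; use one of the supported regions returned above:

    $ aws configure get region
    $ aws configure set region us-east-2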
  2. Create the VPC. For this tutorial, the following script creates the VPC and its required components. It uses the region configured in your aws CLI and tags the resources with the value of the CLUSTER_NAME environment variable, so export your cluster name before you run it (you will reuse the same value later when creating the cluster).

    #!/bin/bash
    
    set -e
    ##########
    # This script will create the network requirements for a ROSA cluster. This will be
    # a public cluster. This creates:
    # - VPC
    # - Public and private subnets
    # - Internet Gateway
    # - Relevant route tables
    # - NAT Gateway
    #
    # This will automatically use the region configured for the aws cli
    #
    ##########
    
    # The resources created below are tagged with $CLUSTER_NAME, so export your
    # cluster name before running this script, for example: export CLUSTER_NAME=my-cluster
    : "${CLUSTER_NAME:?Export CLUSTER_NAME before running this script}"
    
    VPC_CIDR=10.0.0.0/16
    PUBLIC_CIDR_SUBNET=10.0.1.0/24
    PRIVATE_CIDR_SUBNET=10.0.0.0/24
    
    # Create VPC
    echo -n "Creating VPC..."
    VPC_ID=$(aws ec2 create-vpc --cidr-block $VPC_CIDR --query Vpc.VpcId --output text)
    
    # Create tag name
    aws ec2 create-tags --resources $VPC_ID --tags Key=Name,Value=$CLUSTER_NAME
    
    # Enable dns hostname
    aws ec2 modify-vpc-attribute --vpc-id $VPC_ID --enable-dns-hostnames
    echo "done."
    
    # Create Public Subnet
    echo -n "Creating public subnet..."
    PUBLIC_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PUBLIC_CIDR_SUBNET --query Subnet.SubnetId --output text)
    
    aws ec2 create-tags --resources $PUBLIC_SUBNET_ID --tags Key=Name,Value=$CLUSTER_NAME-public
    echo "done."
    
    # Create private subnet
    echo -n "Creating private subnet..."
    PRIVATE_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PRIVATE_CIDR_SUBNET --query Subnet.SubnetId --output text)
    
    aws ec2 create-tags --resources $PRIVATE_SUBNET_ID --tags Key=Name,Value=$CLUSTER_NAME-private
    echo "done."
    
    # Create an internet gateway for outbound traffic and attach it to the VPC.
    echo -n "Creating internet gateway..."
    IGW_ID=$(aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text)
    echo "done."
    
    aws ec2 create-tags --resources $IGW_ID --tags Key=Name,Value=$CLUSTER_NAME
    
    aws ec2 attach-internet-gateway --vpc-id $VPC_ID --internet-gateway-id $IGW_ID > /dev/null 2>&1
    echo "Attached IGW to VPC."
    
    # Create a route table for outbound traffic and associate it to the public subnet.
    echo -n "Creating route table for public subnet..."
    PUBLIC_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query RouteTable.RouteTableId --output text)
    
    aws ec2 create-tags --resources $PUBLIC_ROUTE_TABLE_ID --tags Key=Name,Value=$CLUSTER_NAME
    echo "done."
    
    aws ec2 create-route --route-table-id $PUBLIC_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID > /dev/null 2>&1
    echo "Created default public route."
    
    aws ec2 associate-route-table --subnet-id $PUBLIC_SUBNET_ID --route-table-id $PUBLIC_ROUTE_TABLE_ID > /dev/null 2>&1
    echo "Public route table associated"
    
    # Create a NAT gateway in the public subnet for outgoing traffic from the private network.
    echo -n "Creating NAT Gateway..."
    NAT_IP_ADDRESS=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
    
    NAT_GATEWAY_ID=$(aws ec2 create-nat-gateway --subnet-id $PUBLIC_SUBNET_ID --allocation-id $NAT_IP_ADDRESS --query NatGateway.NatGatewayId --output text)
    
    aws ec2 create-tags --resources $NAT_IP_ADDRESS $NAT_GATEWAY_ID --tags Key=Name,Value=$CLUSTER_NAME
    sleep 10
    echo "done."
    
    # Create a route table for the private subnet to the NAT gateway.
    echo -n "Creating a route table for the private subnet to the NAT gateway..."
    PRIVATE_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query RouteTable.RouteTableId --output text)
    
    aws ec2 create-tags --resources $PRIVATE_ROUTE_TABLE_ID $NAT_IP_ADDRESS --tags Key=Name,Value=$CLUSTER_NAME-private
    
    aws ec2 create-route --route-table-id $PRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --nat-gateway-id $NAT_GATEWAY_ID > /dev/null 2>&1
    
    aws ec2 associate-route-table --subnet-id $PRIVATE_SUBNET_ID --route-table-id $PRIVATE_ROUTE_TABLE_ID > /dev/null 2>&1
    
    echo "done."
    
    # echo "***********VARIABLE VALUES*********"
    # echo "VPC_ID="$VPC_ID
    # echo "PUBLIC_SUBNET_ID="$PUBLIC_SUBNET_ID
    # echo "PRIVATE_SUBNET_ID="$PRIVATE_SUBNET_ID
    # echo "PUBLIC_route_TABLE_ID="$PUBLIC_route_TABLE_ID
    # echo "PRIVATE_route_TABLE_ID="$PRIVATE_route_TABLE_ID
    # echo "NAT_GATEWAY_ID="$NAT_GATEWAY_ID
    # echo "IGW_ID="$IGW_ID
    # echo "NAT_IP_ADDRESS="$NAT_IP_ADDRESS
    
    echo "Setup complete."
    echo ""
    echo "To make the cluster create commands easier, please run the following commands to set the environment variables:"
    echo "export PUBLIC_SUBNET_ID=$PUBLIC_SUBNET_ID"
    echo "export PRIVATE_SUBNET_ID=$PRIVATE_SUBNET_ID"
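    After the script finishes, you can optionally confirm that the subnets were created and tagged as expected. The following query is a minimal check and assumes that CLUSTER_NAME is still exported in your shell:

    $ aws ec2 describe-subnets \
        --filters Name=tag:Name,Values=$CLUSTER_NAME-public,$CLUSTER_NAME-private \
        --query 'Subnets[].{Name:Tags[?Key==`Name`]|[0].Value,Subnet:SubnetId,CIDR:CidrBlock}' \
        --output table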
  3. The script outputs two export commands with your subnet IDs filled in. Copy and run them to store the subnet IDs as environment variables for later use:

    $ export PUBLIC_SUBNET_ID=$PUBLIC_SUBNET_ID
    $ export PRIVATE_SUBNET_ID=$PRIVATE_SUBNET_ID
  4. Confirm your environment variables by running the following command:

    $ echo "Public Subnet: $PUBLIC_SUBNET_ID"; echo "Private Subnet: $PRIVATE_SUBNET_ID"
    Example output
    Public Subnet: subnet-0faeeeb0000000000
    Private Subnet: subnet-011fe340000000000

Creating your OIDC configuration

In this tutorial, we will use the automatic mode when creating the OIDC configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the ROSA CLI to create your cluster’s unique OIDC configuration.

  • Create the OIDC configuration by running the following command:

    $ export OIDC_ID=$(rosa create oidc-config --mode auto --managed --yes -o json | jq -r '.id')
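    To confirm that the ID was captured and that the configuration exists, you can echo the variable and list your OIDC configurations; for example:

    $ echo $OIDC_ID
    $ rosa list oidc-config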

Creating additional environment variables

  • Run the following command to set up environment variables. These variables make it easier to run the command to create a ROSA cluster:

    $ export CLUSTER_NAME=<cluster_name>
    $ export REGION=<VPC_region>

    Run rosa whoami to find the VPC region.
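    Because the VPC script above used the region configured for your aws CLI, you can also derive REGION from that same configuration instead of typing it, assuming you have not changed the CLI region since creating the VPC:

    $ export REGION=$(aws configure get region)
    $ echo "Cluster: $CLUSTER_NAME, Region: $REGION"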

Creating a cluster

  1. Optional: Run the following command to create the account-wide roles and policies, including the Operator policies and the AWS IAM roles and policies:

    Only complete this step if this is the first time you are deploying ROSA in this account and you have not yet created your account roles and policies.

    $ rosa create account-roles --mode auto --yes
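    If you are not sure whether account-wide roles and policies already exist in this AWS account, you can list them before deciding whether to run the command above; for example:

    $ rosa list account-roles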
  2. Run the following command to create the cluster:

    $ rosa create cluster --cluster-name $CLUSTER_NAME \
    --subnet-ids ${PUBLIC_SUBNET_ID},${PRIVATE_SUBNET_ID} \
    --hosted-cp \
    --region $REGION \
    --oidc-config-id $OIDC_ID \
    --sts --mode auto --yes

The cluster is ready after about 10 minutes. The cluster has a control plane distributed across three AWS availability zones in your selected region, and two worker nodes are created in your AWS account.

Checking the installation status

  1. Run one of the following commands to check the status of the cluster:

    • For a detailed view of the cluster status, run:

      $ rosa describe cluster --cluster $CLUSTER_NAME
    • For an abridged view of the cluster status, run:

      $ rosa list clusters
    • To watch the log as it progresses, run:

      $ rosa logs install --cluster $CLUSTER_NAME --watch
  2. Once the state changes to “ready”, your cluster is installed. It might take a few more minutes for the worker nodes to come online.
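    While you wait for the worker nodes, you can also list the cluster's machine pools to see the configured replica counts; for example:

    $ rosa list machinepools --cluster $CLUSTER_NAME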