You can configure OpenShift Container Platform to use Microsoft Azure load balancers and disks for persistent application data.
Azure roles
Configuring Microsoft Azure for OpenShift Container Platform requires the following Microsoft Azure role:
Contributor | To create and manage all types of Microsoft Azure resources. |
See the Classic subscription administrator roles vs. Azure RBAC roles vs. Azure AD administrator roles documentation for more information.
Permissions
Configuring Microsoft Azure for OpenShift Container Platform requires a service principal,
which allows the creation and management of Kubernetes service
load balancers and disks for persistent storage. The service principal values
are defined at installation time and deployed to the Azure configuration file, located at /etc/origin/cloudprovider/azure.conf
on OpenShift Container Platform master and node hosts.
Using the Azure CLI, obtain the account subscription ID:
# az account list
[
  {
    "cloudName": "AzureCloud",
    "id": "<subscription>", (1)
    "isDefault": false,
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "<tenant-id>",
    "user": {
      "name": "admin@example.com",
      "type": "user"
    }
  }
]
1 | The subscription ID to use to create the new permissions. |
Create the service principal with the Microsoft Azure
role of contributor and with the scope of the Microsoft Azure subscription and
the resource group. Record the output of these values to be used when defining
the inventory. Use the <subscription>
value from the previous step in place of the value below:
# az ad sp create-for-rbac --name openshiftcloudprovider \
     --password <secret> --role contributor \
     --scopes /subscriptions/<subscription>/resourceGroups/<resource-group>

Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
{
  "appId": "<app-id>",
  "displayName": "ocpcloudprovider",
  "name": "http://ocpcloudprovider",
  "password": "<secret>",
  "tenant": "<tenant-id>"
}
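To optionally confirm that the contributor role was assigned to the new service principal, you can list its role assignments. This check is a suggestion and not part of the original procedure; substitute the <app-id> value returned above:

# az role assignment list --assignee <app-id> --output table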
Integrating OpenShift Container Platform with Microsoft Azure requires the following components or services to create a highly-available and full-featured environment.
To ensure that the appropriate number of instances can be launched, request an increase in CPU quota from Microsoft before creating instances.
Resource groups contain all Microsoft Azure components for a deployment, including networking, load balancers, virtual machines, and DNS. Quotas and permissions can be applied to resources groups to control and manage resources deployed on Microsoft Azure. Resource groups are created and defined per geographic region. All resources created for an OpenShift Container Platform environment should be within the same geographic region and within the same resource group.
See Azure Resource Manager overview for more information.
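For illustration, a resource group can be created with the Azure CLI as shown below. The resource group name and location are placeholder values:

# az group create --name <resource-group> --location <location>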
Azure Virtual Networks are used to isolate Azure cloud networks from one another. Instances and load balancers use the virtual network to allow communication with each other and to and from the Internet. The virtual network allows for the creation of one or many subnets to be used by components within a resource group. You can also connect virtual networks to various VPN services, allowing communication with on-premise services.
See What is Azure Virtual Network? for more information.
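As an example, a virtual network and a subnet for the OpenShift Container Platform instances might be created as follows. The names and address ranges are placeholder values, and the exact flags can vary between Azure CLI versions:

# az network vnet create --resource-group <resource-group> \
    --name <vnet-name> --address-prefix 10.0.0.0/16
# az network vnet subnet create --resource-group <resource-group> \
    --vnet-name <vnet-name> --name <subnet-name> --address-prefix 10.0.0.0/24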
Azure offers a managed DNS service that provides internal and Internet-accessible host name and load balancer resolution. The reference environment uses a DNS zone to host three DNS A records to allow for mapping of public IPs to OpenShift Container Platform resources and a bastion.
See What is Azure DNS? for more information.
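For example, a DNS zone and an A record that maps a public IP to an OpenShift Container Platform host name could be created as shown below. The zone name, record name, and IP address are placeholders:

# az network dns zone create --resource-group <resource-group> --name example.com
# az network dns record-set a add-record --resource-group <resource-group> \
    --zone-name example.com --record-set-name openshift-master --ipv4-address <public-ip>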
Azure load balancers allow network connectivity for scaling and high availability of services running on virtual machines within the Azure environment.
Storage Accounts allow resources, such as virtual machines, to access the different types of storage components offered by Microsoft Azure. During installation, the storage account defines the location of the object-based blob storage used for the OpenShift Container Platform registry.
See Introduction to Azure Storage for more information, or the Configuring the OpenShift Container Platform registry for Microsoft Azure section for steps to create the storage account for the registry.
Azure offers the ability to create service accounts, which access, manage, or create components within Azure. The service account grants API access to specific services. For example, a service principal allows Kubernetes or OpenShift Container Platform instances to request persistent storage and load balancers. Service principals allow for granular access to be given to instances or users for specific functions.
See Application and service principal objects in Azure Active Directory for more information.
Availability sets ensure that the deployed VMs are distributed across multiple isolated hardware nodes in a cluster. The distribution helps to ensure that when maintenance on the cloud provider hardware occurs, instances will not all run on one specific node.
You should segment instances to different availability sets based on their role. For example, one availability set containing three master hosts, one availability set containing infrastructure hosts, and one availability set containing application hosts. This allows for segmentation and the ability to use external load balancers within OpenShift Container Platform.
See Manage the availability of Linux virtual machines for more information.
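For instance, an availability set for the master hosts might be created as follows. The set name and the fault and update domain counts are example values only:

# az vm availability-set create --resource-group <resource-group> \
    --name ocp-master-as --platform-fault-domain-count 2 \
    --platform-update-domain-count 5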
Network Security Groups (NSGs) provide a list of rules to either allow or deny traffic to resources deployed within an Azure Virtual Network. NSGs use numeric priority values and rules to define what items are allowed to communicate with each other. You can place restrictions on where communication is allowed to occur, such as within only the virtual network, from load balancers, or from everywhere.
Priority values allow for administrators to grant granular values on the order in which port communication is allowed or not allowed to occur.
See Plan virtual networks for more information.
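As a sketch, an NSG and a prioritized rule that allows the master API port might be created as follows. The group name, rule name, and priority are example values:

# az network nsg create --resource-group <resource-group> --name ocp-master-nsg
# az network nsg rule create --resource-group <resource-group> --nsg-name ocp-master-nsg \
    --name allow-openshift-api --priority 200 --protocol Tcp \
    --destination-port-ranges 443 --access Allow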
A successful OpenShift Container Platform environment requires that certain minimum hardware requirements are met.
See the Minimum Hardware Requirements section in the OpenShift Container Platform documentation or Sizes for Cloud Services for more information.
Configuring OpenShift Container Platform for Azure requires the /etc/origin/cloudprovider/azure.conf file on each node host.
If the file does not exist, you can create it.
tenantId: <> (1)
subscriptionId: <> (2)
aadClientId: <> (3)
aadClientSecret: <> (4)
aadTenantId: <> (5)
resourceGroup: <> (6)
cloud: <> (7)
location: <> (8)
vnetName: <> (9)
securityGroupName: <> (10)
primaryAvailabilitySetName: <> (11)
1 | The AAD tenant ID for the subscription that the cluster is deployed in. |
2 | The Azure subscription ID that the cluster is deployed in. |
3 | The client ID for an AAD application with RBAC access to talk to Azure RM APIs. |
4 | The client secret for an AAD application with RBAC access to talk to Azure RM APIs. |
5 | Ensure this is the same as tenant ID (optional). |
6 | The Azure Resource Group name that the Azure VM belongs to. |
7 | The specific Azure cloud. For example, AzurePublicCloud. |
8 | The compact style Azure region. For example, southeastasia (optional). |
9 | Virtual network containing instances and used when creating load balancers. |
10 | Security group name associated with instances and load balancers. |
11 | Availability set to use when creating resources such as load balancers (optional). |
The NIC used for accessing the instance must have an internal DNS name set, or the node will not be able to rejoin the cluster, display build logs to the console, and oc rsh will not work correctly.
The example inventory below assumes that the following items have been created:
A resource group
An Azure virtual network
One or more network security groups that contain the required OpenShift Container Platform ports
A storage account
A service principal
Two load balancers
Two or more DNS entries for the routers and for the OpenShift Container Platform web console
Three Availability Sets
Three master instances
Three infrastructure instances
One or more application instances
The inventory below uses the default storageclass
to create persistent volumes
to be used by the metrics, logging, and service catalog components managed by a
service principal. The registry uses Microsoft Azure blob storage.
If the Microsoft Azure instances use managed disks, provide one of the following variables in the inventory:
openshift_storageclass_parameters={'kind': 'Managed', 'storageaccounttype': 'Premium_LRS'}
or
openshift_storageclass_parameters={'kind': 'Managed', 'storageaccounttype': 'Standard_LRS'}
This ensures the storageclass is created to use managed disks.
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_ssh_user=cloud-user
ansible_become=true
openshift_cloudprovider_kind=azure

#cloudprovider
openshift_cloudprovider_kind=azure
openshift_cloudprovider_azure_client_id=v9c97ead-1v7E-4175-93e3-623211bed834
openshift_cloudprovider_azure_client_secret=s3r3tR3gistryN0special
openshift_cloudprovider_azure_tenant_id=422r3f91-21fe-4esb-vad5-d96dfeooee5d
openshift_cloudprovider_azure_subscription_id=6003c1c9-d10d-4366-86cc-e3ddddcooe2d
openshift_cloudprovider_azure_resource_group=openshift
openshift_cloudprovider_azure_location=eastus
#endcloudprovider

openshift_master_api_port=443
openshift_master_console_port=443
openshift_hosted_router_replicas=3
openshift_hosted_registry_replicas=1
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-master.example.com
openshift_master_cluster_public_hostname=openshift-master.example.com
openshift_master_default_subdomain=apps.openshift.example.com
openshift_deployment_type=openshift-enterprise
openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=example,dc=com)'}]

networkPluginName=redhat/ovs-networkpolicy
openshift_examples_modify_imagestreams=true

# Storage Class change to use managed storage
openshift_storageclass_parameters={'kind': 'managed', 'storageaccounttype': 'Standard_LRS'}

# service catalog
openshift_enable_service_catalog=true
openshift_hosted_etcd_storage_kind=dynamic
openshift_hosted_etcd_storage_volume_name=etcd-vol
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=SC_STORAGE
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

# metrics
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_storage_volume_size=20Gi
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}

# logging
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_storage_volume_size=50Gi
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}

# Setup azure blob registry storage
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_azure_blob_accountkey=uZdkVlbca6xzwBqK8VDz15/loLUoc8I6cPfP31ZS+QOSxL6ylWT6CLrcadSqvtNTMgztxH4CGjYfVnRNUhvMiA==
openshift_hosted_registry_storage_provider=azure_blob
openshift_hosted_registry_storage_azure_blob_accountname=registry
openshift_hosted_registry_storage_azure_blob_container=registry
openshift_hosted_registry_storage_azure_blob_realm=core.windows.net

[masters]
ocp-master-1
ocp-master-2
ocp-master-3

[etcd]
ocp-master-1
ocp-master-2
ocp-master-3

[nodes]
ocp-master-1 openshift_node_group_name="node-config-master"
ocp-master-2 openshift_node_group_name="node-config-master"
ocp-master-3 openshift_node_group_name="node-config-master"
ocp-infra-1 openshift_node_group_name="node-config-infra"
ocp-infra-2 openshift_node_group_name="node-config-infra"
ocp-infra-3 openshift_node_group_name="node-config-infra"
ocp-app-1 openshift_node_group_name="node-config-compute"
You can configure OpenShift Container Platform for Microsoft Azure in two ways: by using the Ansible inventory file at installation time, or manually after installation.
Add the following to the Ansible inventory file located at /etc/ansible/hosts by default to configure your OpenShift Container Platform environment for Microsoft Azure:
[OSEv3:vars]
openshift_cloudprovider_kind=azure
openshift_cloudprovider_azure_client_id=<app_ID> (1)
openshift_cloudprovider_azure_client_secret=<secret> (2)
openshift_cloudprovider_azure_tenant_id=<tenant_ID> (3)
openshift_cloudprovider_azure_subscription_id=<subscription> (4)
openshift_cloudprovider_azure_resource_group=<resource_group> (5)
openshift_cloudprovider_azure_location=<location> (6)
1 | The app ID value for the service principal. |
2 | The secret containing the password for the service principal. |
3 | The tenant in which the service principal exists. |
4 | The subscription used by the service principal. |
5 | The resource group where the service account exists. |
6 | The Microsoft Azure location where the resource group exists. |
Installing with Ansible also creates and configures the following files to fit your Microsoft Azure environment:
/etc/origin/cloudprovider/azure.conf
/etc/origin/master/master-config.yaml
/etc/origin/node/node-config.yaml
Perform the following on all master hosts.
Edit the master configuration file located at
/etc/origin/master/master-config.yaml
by default on all masters and update the
contents of the apiServerArguments
and controllerArguments
sections:
kubernetesMasterConfig:
...
apiServerArguments:
cloud-provider:
- "azure"
cloud-config:
- "/etc/origin/cloudprovider/azure.conf"
controllerArguments:
cloud-provider:
- "azure"
cloud-config:
- "/etc/origin/cloudprovider/azure.conf"
When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, ensure master-config.yaml is in the /etc/origin/master directory instead of /etc/.
When you configure OpenShift Container Platform for Microsoft Azure using Ansible, the /etc/origin/cloudprovider/azure.conf
file is created automatically.
Because you are manually configuring OpenShift Container Platform for Microsoft Azure, you must create the file on all node instances and include the following:
tenantId: <tenant_ID> (1)
subscriptionId: <subscription> (2)
aadClientId: <app_ID> (3)
aadClientSecret: <secret> (4)
aadTenantId: <tenant_ID> (5)
resourceGroup: <resource_group> (6)
location: <location> (7)
1 | The tenant in which the service principal exists. |
2 | The subscription used by the service principal. |
3 | The appID value for the service principal. |
4 | The secret containing the password for the service principal. |
5 | The tenant in which the service principal exists. |
6 | The resource group where the service account exists. |
7 | The Microsoft Azure location where the resource group exists. |
Restart the OpenShift Container Platform master services:
# master-restart api
# master-restart controllers
Perform the following on all node hosts.
Edit the appropriate node
configuration map and update the contents of the kubeletArguments
section:
kubeletArguments:
cloud-provider:
- "azure"
cloud-config:
- "/etc/origin/cloudprovider/azure.conf"
The NIC used for accessing the instance must have an internal DNS name set, or the node will not be able to rejoin the cluster, display build logs to the console, and oc rsh will not work correctly.
Restart the OpenShift Container Platform services on all nodes:
# systemctl restart atomic-openshift-node
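After the node services restart, you can optionally confirm that all nodes rejoined the cluster and report a Ready status:

$ oc get nodes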
Microsoft Azure provides object cloud storage that OpenShift Container Platform can use to store container images using the OpenShift Container Platform container registry.
For more information, see Cloud Storage in the Azure documentation.
You can configure the registry either using Ansible or manually by configuring the registry configuration file.
You must create a storage account to host the registry images before installation. The following command creates a storage account which is used during installation for image storage:
You can use Microsoft Azure blob storage for storing container images. The OpenShift Container Platform registry uses blob storage to allow for the registry to grow dynamically in size without the need for intervention from an administrator.
Create an Azure storage account:
az storage account create --name <account_name> \
    --resource-group <resource_group> \
    --location <location> \
    --sku Standard_LRS
This creates an account key. To view the account key:
az storage account keys list \
    --account-name <account-name> \
    --resource-group <resource-group> \
    --output table

KeyName    Permissions    Value
key1       Full           <account-key>
key2       Full           <extra-account-key>
Only one account key value is required for the configuration of the OpenShift Container Platform registry.
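If you prefer to capture a single key value directly, for example to populate the registry inventory variable in the next step, it can optionally be extracted with a JMESPath query:

az storage account keys list --account-name <account-name> \
    --resource-group <resource-group> --query "[0].value" --output tsv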
Option 1: Configuring the OpenShift Container Platform registry for Azure using Ansible
Configure the Ansible inventory for the registry to use the storage account:
[OSEv3:vars]
# Azure Registry Configuration
openshift_hosted_registry_replicas=1 (1)
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_azure_blob_accountkey=<account_key> (2)
openshift_hosted_registry_storage_provider=azure_blob
openshift_hosted_registry_storage_azure_blob_accountname=<account_name> (3)
openshift_hosted_registry_storage_azure_blob_container=<registry> (4)
openshift_hosted_registry_storage_azure_blob_realm=core.windows.net
1 | The number of replicas to configure. |
2 | The account key associated with the <account-name>. |
3 | The storage account name. |
4 | The directory used to store the data. Defaults to registry. |
Option 2: Manually configuring OpenShift Container Platform registry for Microsoft Azure
To use Microsoft Azure object storage, edit the registry’s configuration file and mount to the registry pod.
Export the current config.yml:
$ oc get secret registry-config \
-o jsonpath='{.data.config\.yml}' -n default | base64 -d \
>> config.yml.old
Create a new configuration file from the old config.yml:
$ cp config.yml.old config.yml
Edit the file to include the Azure parameters:
storage:
delete:
enabled: true
cache:
blobdescriptor: inmemory
azure:
accountname: <account-name> (1)
accountkey: <account-key> (2)
container: registry (3)
realm: core.windows.net (4)
1 | Replace with the storage account name. |
2 | The account key associated with the <account-name>. |
3 | The directory used to store the data. Defaults to registry. |
4 | The storage realm. Defaults to core.windows.net. |
Delete the registry-config
secret:
$ oc delete secret registry-config -n default
Recreate the secret to reference the updated configuration file:
$ oc create secret generic registry-config \
--from-file=config.yml -n default
Redeploy the registry to read the updated configuration:
$ oc rollout latest docker-registry -n default
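To optionally watch the redeployment until the new registry pod rolls out, you can run:

$ oc rollout status dc/docker-registry -n default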
Verifying the registry is using blob object storage
To verify if the registry is using Microsoft Azure blob storage:
After a successful registry deployment, the registry deploymentconfig
will always show that the registry is using an emptydir
instead of Microsoft Azure blob storage:
$ oc describe dc docker-registry -n default
...
Mounts:
...
/registry from registry-storage (rw)
Volumes:
registry-storage:
Type: EmptyDir (1)
...
1 | The temporary directory that shares a pod’s lifetime. |
Check if the /registry mount point is empty. This is the volume Microsoft Azure storage will use:
$ oc exec \
$(oc get pod -l deploymentconfig=docker-registry \
-o=jsonpath='{.items[0].metadata.name}') -i -t -- ls -l /registry
total 0
If it is empty, it is because the Microsoft Azure blob configuration is
performed in the registry-config
secret:
$ oc describe secret registry-config
Name: registry-config
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
config.yml: 398 bytes
The installer creates a config.yml file with the desired configuration using the
extended registry capabilities as seen in Storage in the installation documentation. To view the configuration file, including the storage
section where the storage bucket configuration is stored:
$ oc exec \
$(oc get pod -l deploymentconfig=docker-registry \
-o=jsonpath='{.items[0].metadata.name}') \
cat /etc/registry/config.yml
version: 0.1
log:
level: debug
http:
addr: :5000
storage:
delete:
enabled: true
cache:
blobdescriptor: inmemory
azure:
accountname: registry
accountkey: uZekVBJBa6xzwAqK8EDz15/hoHUoc8I6cPfP31ZS+QOSxLfo7WT7CLrVPKaqvtNTMgztxH7CGjYfpFRNUhvMiA==
container: registry
realm: core.windows.net
auth:
openshift:
realm: openshift
middleware:
registry:
- name: openshift
repository:
- name: openshift
options:
pullthrough: True
acceptschema2: True
enforcequota: False
storage:
- name: openshift
Or you can view the secret:
$ oc get secret registry-config -o jsonpath='{.data.config\.yml}' | base64 -d
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  delete:
    enabled: true
  cache:
    blobdescriptor: inmemory
  azure:
    accountname: registry
    accountkey: uZekVBJBa6xzwAqK8EDz15/hoHUoc8I6cPfP31ZS+QOSxLfo7WT7CLrVPKaqvtNTMgztxH7CGjYfpFRNUhvMiA==
    container: registry
    realm: core.windows.net
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        pullthrough: True
        acceptschema2: True
        enforcequota: False
  storage:
    - name: openshift
If using an emptyDir
volume, the /registry
mountpoint looks like the
following:
$ oc exec \
    $(oc get pod -l deploymentconfig=docker-registry \
    -o=jsonpath='{.items[0].metadata.name}') -i -t -- df -h /registry
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc         30G  226M   30G   1% /registry

$ oc exec \
    $(oc get pod -l deploymentconfig=docker-registry \
    -o=jsonpath='{.items[0].metadata.name}') -i -t -- ls -l /registry
total 0
drwxr-sr-x. 3 1000000000 1000000000 22 Jun 19 12:24 docker
OpenShift Container Platform can use Microsoft Azure storage through the persistent volume mechanism. OpenShift Container Platform creates the disk in the resource group and attaches the disk to the correct instance.
The following storageclass is created when you configure the Azure cloud provider at installation using the openshift_cloudprovider_kind=azure and openshift_cloudprovider_azure_* variables in the Ansible inventory:
$ oc get --export storageclass azure-standard -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.kubernetes.io/is-default-class: "true"
creationTimestamp: null
name: azure-standard
parameters:
kind: Shared
storageaccounttype: Standard_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
If you did not use Ansible to enable OpenShift Container Platform and Microsoft Azure integration, you can create the storageclass
manually. See the Dynamic provisioning and creating storage classes section for more information.
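To optionally confirm that dynamic provisioning works with the azure-standard storageclass, you can create a test persistent volume claim. This is a minimal sketch; the claim name and requested size are arbitrary example values:

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azure-standard
  resources:
    requests:
      storage: 5Gi
EOF

Once the claim binds, OpenShift Container Platform creates a corresponding disk in the resource group, which you can verify with oc get pvc azure-pvc-test.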
Currently, the default storageclass kind is shared, which means that the Microsoft Azure instances must use unmanaged disks. You can optionally allow instances to use managed disks instead by providing the openshift_storageclass_parameters={'kind': 'Managed', 'storageaccounttype': 'Premium_LRS'} or openshift_storageclass_parameters={'kind': 'Managed', 'storageaccounttype': 'Standard_LRS'} variable in the Ansible inventory file at installation.
Microsoft Azure disks are ReadWriteOnce access mode, which means the volume can only be mounted as read-write by a single node.
Red Hat OpenShift Container Storage (RHOCS) is a provider-agnostic persistent storage solution for OpenShift Container Platform, either in-house or in hybrid clouds. As a Red Hat storage solution, RHOCS is completely integrated with OpenShift Container Platform for deployment, management, and monitoring regardless of whether it is installed on OpenShift Container Platform (converged) or with OpenShift Container Platform (independent). OpenShift Container Storage is not limited to a single availability zone or node, which makes it likely to survive an outage. You can find complete instructions for using RHOCS in the RHOCS 3.10 Deployment Guide.
OpenShift Container Platform can leverage the Microsoft Azure load balancer by
exposing services externally using a LoadBalancer
service. OpenShift Container Platform
creates the load balancer in Microsoft Azure and creates the proper firewall
rules.
Currently, a bug causes extra variables to be included in the Microsoft Azure infrastructure when using it as a cloud provider and when using it as an external load balancer. See the following for more information:
Ensure the Azure configuration file located at /etc/origin/cloudprovider/azure.conf is correctly configured with the appropriate objects. See the Manually configuring OpenShift Container Platform for Microsoft Azure section for an example /etc/origin/cloudprovider/azure.conf file.
Once the values are added, restart the OpenShift Container Platform services on all hosts:
# systemctl restart atomic-openshift-node
# master-restart api
# master-restart controllers
Create a new application:
$ oc new-app openshift/hello-openshift
Expose the load balancer service:
$ oc expose dc hello-openshift --name='hello-openshift-external' --type='LoadBalancer'
This creates a LoadBalancer service similar to the following:
apiVersion: v1
kind: Service
metadata:
labels:
app: hello-openshift
name: hello-openshift-external
spec:
externalTrafficPolicy: Cluster
ports:
- name: port-1
nodePort: 30714
port: 8080
protocol: TCP
targetPort: 8080
- name: port-2
nodePort: 30122
port: 8888
protocol: TCP
targetPort: 8888
selector:
app: hello-openshift
deploymentconfig: hello-openshift
sessionAffinity: None
type: LoadBalancer
Verify that the service has been created:
$ oc get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                         AGE
hello-openshift            ClusterIP      172.30.223.255   <none>          8080/TCP,8888/TCP               1m
hello-openshift-external   LoadBalancer   172.30.99.54     40.121.42.180   8080:30714/TCP,8888:30122/TCP   4m
The LoadBalancer
type and External-IP
fields indicate that the service is
using Microsoft Azure load balancers to expose the application.
This creates the following required objects in the Azure infrastructure:
A load balancer:
az network lb list -o table
Location Name ProvisioningState ResourceGroup ResourceGuid
---------- ----------- ------------------- --------------- ------------------------------------
eastus kubernetes Succeeded refarch-azr 30ec1980-b7f5-407e-aa4f-e570f06f168d
eastus OcpMasterLB Succeeded refarch-azr acb537b2-8a1a-45d2-aae1-ea9eabfaea4a
eastus OcpRouterLB Succeeded refarch-azr 39087c4c-a5dc-457e-a5e6-b25359244422
To verify that the load balancer is properly configured, run the following from an external host:
$ curl 40.121.42.180:8080 (1)
Hello OpenShift!
1 | Replace with the values from the EXTERNAL-IP verification step above as well as the port number. |
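If you prefer to retrieve the external IP address programmatically rather than copying it from the oc get svc output, the following optional command returns it:

$ oc get svc hello-openshift-external -o jsonpath='{.status.loadBalancer.ingress[0].ip}'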