Accessing Your Services | Getting Started | OpenShift Dedicated 3

Access your cluster

Once your OpenShift Dedicated cluster is configured and ready to use, you can access it through the following paths:

  • Cluster ID: The unique cluster name provided by the customer during provisioning. It is lowercase, and only contains letters, numbers, and hyphens.

  • Console URL: The OpenShift Dedicated URL for the web console.

    https://console.<cluster-id>.openshift.com
  • API URL: The OpenShift Dedicated URL for the OpenShift and Kubernetes REST API.

    https://api.<cluster-id>.openshift.com
  • Registry URL: The OpenShift Dedicated URL for the private image registry. In addition to containing all images used by OpenShift Dedicated, the registry can be used directly with podman pull, podman push, docker pull, or docker push; see the login example after this list.

    https://registry.<cluster-id>.openshift.com
  • Metrics API URL: The OpenShift Dedicated URL for the Hawkular Metrics API.

    https://metrics.<cluster-id>.openshift.com
  • Logging URL: The OpenShift Dedicated URL for the aggregate logging Kibana interface.

    https://logs.<cluster-id>.openshift.com
  • If an authentication callback URL is necessary, you can configure it with:

    https://api.<cluster-id>.openshift.com/oauth2callback/<IdP name>
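
For example, you can log in to the cluster from the command line against the API URL, and then authenticate podman against the registry with your session token. This is a minimal sketch; substitute your own cluster ID, credentials, and image path:

$ oc login https://api.<cluster-id>.openshift.com
$ podman login -u "$(oc whoami)" -p "$(oc whoami -t)" registry.<cluster-id>.openshift.com
$ podman pull registry.<cluster-id>.openshift.com/<project>/<image-stream>:<tag>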

Configuring AWS virtual private clouds

If a virtual private cloud (VPC) peering connection was requested, the VPC peering request is initiated from the Red Hat OpenShift AWS account. First, accept the peering request, and then configure the Route Tables (an AWS CLI alternative is sketched after these steps):

  1. Log in to your AWS Web Console.

  2. Select the Route Table for your VPC (VPC → Route Tables).

  3. Select the Routes tab.

  4. Click Edit.

  5. Enter the Dedicated Cluster VPC CIDR block in the Destination text box.

  6. Enter the Peering Connection ID in the Target text box.

  7. Click Save.
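
If you prefer the AWS CLI to the web console, the same route can be added with a command along these lines. This is only a sketch; the route table ID, the Dedicated cluster VPC CIDR block, and the peering connection ID are placeholders that you must substitute:

$ aws ec2 create-route \
    --route-table-id <route-table-id> \
    --destination-cidr-block <dedicated-cluster-vpc-cidr> \
    --vpc-peering-connection-id <peering-connection-id>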

Configure your application routes

When your cluster is provisioned, an AWS Elastic Load Balancer (ELB) is created to route application traffic into the cluster. The domain for your ELB is configured to route application traffic via http(s)://*.<shard-id>.<cluster-id>.openshiftapps.com. The <shard-id> is a four-character string that is communicated to you after initial provisioning.
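
For example, with a shard ID of 1234 and a cluster ID of my-example (the same illustrative values used in the CNAME example below), an application route named myapp would be reachable at:

    https://myapp.1234.my-example.openshiftapps.com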

If you want to use custom domain names for your application routes, you should set up a CNAME record in your DNS host to point to elb.<shard-id>.<cluster-id>.openshiftapps.com. While elb is recommended as a reminder for where this record is pointing, you can use any string for this value. You can create these CNAME records for each custom route you have, or you can create a wildcard CNAME record. For example:

*.openshift.example.com    CNAME    elb.1234.my-example.openshiftapps.com

This allows you to create routes like app1.openshift.example.com and app2.openshift.example.com without having to update your DNS every time.
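
Once the record is in place, you can verify that a host name under the wildcard resolves through the ELB record. A quick check, assuming the example record above:

$ dig +short app1.openshift.example.com CNAME
elb.1234.my-example.openshiftapps.com.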

Customers with configured VPC peering or VPN connections have the option of requesting a second ELB, so that application routes can be configured as internal-only or externally available. The domain for this ELB will be identical to the first, with a different <shard-id> value. By default, application routes are handled by the internal-only router. To expose an application or service externally, you must create a new route with a specific label, route=external.

To expose a new route for an existing service, apply the label route=external and define a host name that contains the secondary, public router shard ID:

$ oc expose service <service-name> -l route=external --name=<custom-route-name> --hostname=<custom-hostname>.<shard-id>.<cluster-id>.openshiftapps.com

Alternatively, you can use a custom domain:

$ oc expose service <service-name> -l route=external --name=<custom-route-name> --hostname=<custom-domain>
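
You can then confirm that the new route carries the route=external label and the expected host name by listing routes with that label:

$ oc get routes -l route=external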

Expose TCP Services

OpenShift Dedicated routes expose applications by proxying traffic through HTTP/HTTPS(SNI)/TLS(SNI) to pods and services. A LoadBalancer service creates an AWS Elastic Load Balancer (ELB) for your OpenShift Dedicated cluster, enabling direct TCP access to applications exposed by your LoadBalancer service.

LoadBalancer services require an additional purchase. Contact your sales team if you are interested in using LoadBalancer services for your OpenShift Dedicated cluster.

Check your LoadBalancer Quota

By purchasing LoadBalancer services, you are provided with a quota of LoadBalancers available for your OpenShift Dedicated cluster.

$ oc describe clusterresourcequota service-loadbalancers
Name:       service-loadbalancers
Labels:     <none>
Annotations:    <none>
Resource        Used    Hard
--------        ----    ----
services.loadbalancers  0   4

Expose TCP Service

You can expose your applications over an external LoadBalancer service, enabling access over the public Internet.

$ oc expose dc httpd-example --type=LoadBalancer --name=lb-service
service/lb-service exposed

Create an Internal-Only TCP Service

You can alternatively expose your applications internally only, enabling access only through AWS VPC Peering or a VPN connection.

$ oc expose dc httpd-example --type=LoadBalancer --name=internal-lb --dry-run -o yaml | awk '1;/metadata:/{ print "  annotations:\n    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0" }' | oc create -f -
service/internal-lb exposed
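
The awk step in that pipeline only injects the internal load balancer annotation into the generated manifest before it is created; the relevant fragment of the resulting service looks like this:

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0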

Use your TCP Service

Once your LoadBalancer service is created, you can access your service by using the host name provided to you by OpenShift Dedicated. The LoadBalancer Ingress value is a host name unique to your service that remains static as long as the service is not deleted. If you prefer to use a custom domain, you can create a CNAME DNS record for this host name (see the example after the output below).

$ oc describe svc lb-service
Name:                     lb-service
Namespace:                default
Labels:                   app=httpd-example
Annotations:              <none>
Selector:                 name=httpd-example
Type:                     LoadBalancer
IP:                       10.120.182.252
LoadBalancer Ingress:     a5387ba36201e11e9ba901267fd7abb0-1406434805.us-east-1.elb.amazonaws.com
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31409/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
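
For example, to put a custom name in front of the host name shown under LoadBalancer Ingress, you would create a record similar to the wildcard example above; the left-hand name here is only illustrative:

tcp-app.example.com    CNAME    a5387ba36201e11e9ba901267fd7abb0-1406434805.us-east-1.elb.amazonaws.com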

OpenShift Dedicated monitoring tools

OpenShift Dedicated relies on three systems for providing important monitoring and cluster information to customers.

Access your OpenShift Dedicated portal

The first system is the OpenShift Dedicated Portal. This provides customers with an overview of cluster information, including utilized memory, utilized CPU, number of users, number of projects, subscription information, and maintenance information. Our SRE team also uses the OpenShift Dedicated Portal for scheduling and controlling maintenance events for clusters. The OpenShift Dedicated Portal is responsible for emailing Customer Admins, a group that can manage other users for an organization, when a maintenance event is available to be scheduled. It can also be used by Customer Admins to invite other users to view the clusters on the Dedicated Portal.

The memory and CPU metrics displayed on the OpenShift Dedicated Portal represent current machine usage and do not reflect actual schedulable resource availability.

Receive status updates

The second system is the Status Portal powered by StatusPage. There is an API link between the OpenShift Dedicated Portal and the Status Portal that allows maintenance events created and updated in the OpenShift Dedicated Portal to be reflected in the Status Portal. Once a maintenance event is created and scheduled in the OpenShift Dedicated Portal, the Status Portal is responsible for all emails related to that maintenance event going forward. The Status Portal can also be used manually for sending customers important information about their cluster, e.g., logging space low or expiring certificates.

You were initially invited to the Status Portal as part of the on-boarding process, and you should have received an invitation email. If your email has already been invited, you can also subscribe to notifications via email, SMS, or RSS by changing your preferences in the Status Portal. To request that additional email addresses be invited to the Status Portal, create a support case.

If a mailing list is specified as a Status Portal subscriber, it’s possible that anyone on the mailing list may inadvertently unsubscribe the entire mailing list from receiving status updates. It is the customer’s responsibility to ensure that we always have at least one subscribed customer contact in the Status Portal.

Viewing cluster requests and limits

Finally, the third system is the built-in OpenShift Dedicated Grafana dashboard. You can access the Grafana dashboard from the Cluster Administrator console, under the Monitoring → Dashboards navigation link. This provides you with current and historical metrics regarding memory and CPU requests and limits with cluster-wide, node, namespace, and pod granularity.

Request support

If you have questions about your environment or need to open a support ticket, you can open or view a support case in the Red Hat Customer Portal.

Next steps

You can download the OpenShift Dedicated command line tools from your cluster’s web console. For help getting started with command line tools, see the Get Started with the CLI guide. You can also visit the Getting Started Guide for developers.

Dedicated cluster administrators should view the Cluster Administration Overview for detailed information on available roles and permissions. This section also includes important topics such as managing quotas and configuring service accounts.

If your cluster has been configured with the NetworkPolicy SDN, OpenShift Dedicated administrators are able to create and modify NetworkPolicy objects. By default, NetworkPolicy allows traffic between projects. You can deny traffic between projects by creating two NetworkPolicy objects, as in the sketch below.
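
As a sketch of that approach, the following two objects (the names are illustrative) first deny all ingress traffic by default and then allow traffic between pods in the same project:

$ oc create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF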